2025-02-19 08:03:54.312627 | Job console starting...
2025-02-19 08:03:54.342548 | Updating repositories
2025-02-19 08:03:54.440555 | Preparing job workspace
2025-02-19 08:03:56.152765 | Running Ansible setup...
2025-02-19 08:04:01.185272 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-02-19 08:04:01.901775 |
2025-02-19 08:04:01.901934 | PLAY [Base pre]
2025-02-19 08:04:01.939096 |
2025-02-19 08:04:01.939242 | TASK [Setup log path fact]
2025-02-19 08:04:01.972641 | orchestrator | ok
2025-02-19 08:04:01.995496 |
2025-02-19 08:04:01.995633 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-19 08:04:02.040767 | orchestrator | skipping: Conditional result was False
2025-02-19 08:04:02.055013 |
2025-02-19 08:04:02.055163 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-19 08:04:02.125268 | orchestrator | ok
2025-02-19 08:04:02.135878 |
2025-02-19 08:04:02.136030 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-19 08:04:02.191667 | orchestrator | skipping: Conditional result was False
2025-02-19 08:04:02.207712 |
2025-02-19 08:04:02.207870 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-19 08:04:02.243732 | orchestrator | skipping: Conditional result was False
2025-02-19 08:04:02.260032 |
2025-02-19 08:04:02.260179 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-19 08:04:02.285204 | orchestrator | skipping: Conditional result was False
2025-02-19 08:04:02.296461 |
2025-02-19 08:04:02.296592 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-19 08:04:02.320773 | orchestrator | skipping: Conditional result was False
2025-02-19 08:04:02.338089 |
2025-02-19 08:04:02.338200 | TASK [emit-job-header : Print job information]
2025-02-19 08:04:02.397587 | # Job Information
2025-02-19 08:04:02.397881 | Ansible Version: 2.15.3
2025-02-19 08:04:02.397935 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-02-19 08:04:02.397997 | Pipeline: post
2025-02-19 08:04:02.398032 | Executor: 7d211f194f6a
2025-02-19 08:04:02.398063 | Triggered by: https://github.com/osism/testbed/commit/85c96c20b7be9501a72636be1bcc3cf36c161e52
2025-02-19 08:04:02.398093 | Event ID: 0b0ef85c-ee98-11ef-9575-ea8e300b6615
2025-02-19 08:04:02.408183 |
2025-02-19 08:04:02.408350 | LOOP [emit-job-header : Print node information]
2025-02-19 08:04:02.556349 | orchestrator | ok:
2025-02-19 08:04:02.556598 | orchestrator | # Node Information
2025-02-19 08:04:02.556633 | orchestrator | Inventory Hostname: orchestrator
2025-02-19 08:04:02.556657 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-02-19 08:04:02.556678 | orchestrator | Username: zuul-testbed05
2025-02-19 08:04:02.556698 | orchestrator | Distro: Debian 12.9
2025-02-19 08:04:02.556717 | orchestrator | Provider: static-testbed
2025-02-19 08:04:02.556874 | orchestrator | Label: testbed-orchestrator
2025-02-19 08:04:02.556904 | orchestrator | Product Name: OpenStack Nova
2025-02-19 08:04:02.556926 | orchestrator | Interface IP: 81.163.193.140
2025-02-19 08:04:02.582015 |
2025-02-19 08:04:02.582140 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-02-19 08:04:03.090054 | orchestrator -> localhost | changed
2025-02-19 08:04:03.107912 |
2025-02-19 08:04:03.108113 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-02-19 08:04:04.173446 | orchestrator -> localhost | changed
2025-02-19 08:04:04.189286 |
2025-02-19 08:04:04.189412 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-02-19 08:04:04.481587 | orchestrator -> localhost | ok
2025-02-19 08:04:04.494595 |
2025-02-19 08:04:04.494753 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-02-19 08:04:04.528839 | orchestrator | ok
2025-02-19 08:04:04.547146 | orchestrator | included: /var/lib/zuul/builds/1b46a84af940463697c0e33a75af0ed4/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-02-19 08:04:04.556215 |
2025-02-19 08:04:04.556318 | TASK [add-build-sshkey : Create Temp SSH key]
2025-02-19 08:04:05.153109 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-02-19 08:04:05.153322 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1b46a84af940463697c0e33a75af0ed4/work/1b46a84af940463697c0e33a75af0ed4_id_rsa
2025-02-19 08:04:05.153356 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1b46a84af940463697c0e33a75af0ed4/work/1b46a84af940463697c0e33a75af0ed4_id_rsa.pub
2025-02-19 08:04:05.153380 | orchestrator -> localhost | The key fingerprint is:
2025-02-19 08:04:05.153402 | orchestrator -> localhost | SHA256:V0fCjHBIIwYJxSHi6C8XsBjNxjiaMENcsYzzsF2aVE8 zuul-build-sshkey
2025-02-19 08:04:05.153424 | orchestrator -> localhost | The key's randomart image is:
2025-02-19 08:04:05.153448 | orchestrator -> localhost | +---[RSA 3072]----+
2025-02-19 08:04:05.153469 | orchestrator -> localhost | |o.+*==oE+o.+. . |
2025-02-19 08:04:05.153489 | orchestrator -> localhost | |+Bo.=.o..o. oo |
2025-02-19 08:04:05.153508 | orchestrator -> localhost | |O*=+ . . . . |
2025-02-19 08:04:05.153527 | orchestrator -> localhost | |=BX + . . |
2025-02-19 08:04:05.153546 | orchestrator -> localhost | |++ * S . |
2025-02-19 08:04:05.153564 | orchestrator -> localhost | | . . . |
2025-02-19 08:04:05.153583 | orchestrator -> localhost | | . o |
2025-02-19 08:04:05.153602 | orchestrator -> localhost | | o |
2025-02-19 08:04:05.153621 | orchestrator -> localhost | | |
2025-02-19 08:04:05.153640 | orchestrator -> localhost | +----[SHA256]-----+
2025-02-19 08:04:05.153683 | orchestrator -> localhost | ok: Runtime: 0:00:00.094630
2025-02-19 08:04:05.162548 |
2025-02-19 08:04:05.162666 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-02-19 08:04:05.205586 | orchestrator | ok
2025-02-19 08:04:05.217754 | orchestrator | included: /var/lib/zuul/builds/1b46a84af940463697c0e33a75af0ed4/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-02-19 08:04:05.228992 |
2025-02-19 08:04:05.229099 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-02-19 08:04:05.263954 | orchestrator | skipping: Conditional result was False
2025-02-19 08:04:05.274049 |
2025-02-19 08:04:05.274167 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-02-19 08:04:05.885962 | orchestrator | changed
2025-02-19 08:04:05.898962 |
2025-02-19 08:04:05.899150 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-02-19 08:04:06.188219 | orchestrator | ok
2025-02-19 08:04:06.200241 |
2025-02-19 08:04:06.200370 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-02-19 08:04:06.614247 | orchestrator | ok
2025-02-19 08:04:06.631036 |
2025-02-19 08:04:06.631203 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-02-19 08:04:07.062382 | orchestrator | ok
2025-02-19 08:04:07.081273 |
2025-02-19 08:04:07.081462 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-02-19 08:04:07.108672 | orchestrator | skipping: Conditional result was False
2025-02-19 08:04:07.118883 |
2025-02-19 08:04:07.119022 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-02-19 08:04:07.545732 | orchestrator -> localhost | changed
2025-02-19 08:04:07.573735 |
2025-02-19 08:04:07.573880 | TASK [add-build-sshkey : Add back temp key]
2025-02-19 08:04:07.905608 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1b46a84af940463697c0e33a75af0ed4/work/1b46a84af940463697c0e33a75af0ed4_id_rsa (zuul-build-sshkey)
2025-02-19 08:04:07.905851 | orchestrator -> localhost | ok: Runtime: 0:00:00.009526
2025-02-19 08:04:07.914775 |
2025-02-19 08:04:07.914895 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-02-19 08:04:08.327766 | orchestrator | ok
2025-02-19 08:04:08.338345 |
2025-02-19 08:04:08.338480 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-02-19 08:04:08.373640 | orchestrator | skipping: Conditional result was False
2025-02-19 08:04:08.396259 |
2025-02-19 08:04:08.396434 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-02-19 08:04:08.797706 | orchestrator | ok
2025-02-19 08:04:08.814241 |
2025-02-19 08:04:08.814364 | TASK [validate-host : Define zuul_info_dir fact]
2025-02-19 08:04:08.846119 | orchestrator | ok
2025-02-19 08:04:08.854180 |
2025-02-19 08:04:08.854305 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-02-19 08:04:09.144737 | orchestrator -> localhost | ok
2025-02-19 08:04:09.153888 |
2025-02-19 08:04:09.154021 | TASK [validate-host : Collect information about the host]
2025-02-19 08:04:10.388240 | orchestrator | ok
2025-02-19 08:04:10.405052 |
2025-02-19 08:04:10.405186 | TASK [validate-host : Sanitize hostname]
2025-02-19 08:04:10.484736 | orchestrator | ok
2025-02-19 08:04:10.493539 |
2025-02-19 08:04:10.493656 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-02-19 08:04:11.085766 | orchestrator -> localhost | changed
2025-02-19 08:04:11.094179 |
2025-02-19 08:04:11.094304 | TASK [validate-host : Collect information about zuul worker]
2025-02-19 08:04:11.645992 | orchestrator | ok
2025-02-19 08:04:11.656299 |
2025-02-19 08:04:11.656477 | TASK [validate-host : Write out all zuul information for each host]
2025-02-19 08:04:12.225373 | orchestrator -> localhost | changed
2025-02-19 08:04:12.241375 |
2025-02-19 08:04:12.241515 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-02-19 08:04:12.536335 | orchestrator | ok
2025-02-19 08:04:12.547193 |
2025-02-19 08:04:12.547340 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-02-19 08:05:20.751336 | orchestrator | changed:
2025-02-19 08:05:20.751500 | orchestrator | .d..t...... src/
2025-02-19 08:05:20.751534 | orchestrator | .d..t...... src/github.com/
2025-02-19 08:05:20.751557 | orchestrator | .d..t...... src/github.com/osism/
2025-02-19 08:05:20.751579 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-02-19 08:05:20.751599 | orchestrator | RedHat.yml
2025-02-19 08:05:20.769665 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-02-19 08:05:20.769683 | orchestrator | RedHat.yml
2025-02-19 08:05:20.769735 | orchestrator | = 2.2.0"...
2025-02-19 08:05:33.333599 | orchestrator | 08:05:33.333 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-02-19 08:05:33.392264 | orchestrator | 08:05:33.392 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-02-19 08:05:34.350174 | orchestrator | 08:05:34.349 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-02-19 08:05:35.218344 | orchestrator | 08:05:35.217 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-02-19 08:05:36.754367 | orchestrator | 08:05:36.754 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-02-19 08:05:38.099342 | orchestrator | 08:05:38.098 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-02-19 08:05:39.031943 | orchestrator | 08:05:39.031 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-02-19 08:05:39.880234 | orchestrator | 08:05:39.879 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-02-19 08:05:39.880342 | orchestrator | 08:05:39.880 STDOUT terraform: Providers are signed by their developers.
2025-02-19 08:05:39.880368 | orchestrator | 08:05:39.880 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-02-19 08:05:39.880469 | orchestrator | 08:05:39.880 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-02-19 08:05:39.880718 | orchestrator | 08:05:39.880 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-02-19 08:05:39.880867 | orchestrator | 08:05:39.880 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-02-19 08:05:39.880998 | orchestrator | 08:05:39.880 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-02-19 08:05:39.881085 | orchestrator | 08:05:39.880 STDOUT terraform: you run "tofu init" in the future.
2025-02-19 08:05:39.881207 | orchestrator | 08:05:39.881 STDOUT terraform: OpenTofu has been successfully initialized!
2025-02-19 08:05:39.881364 | orchestrator | 08:05:39.881 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-02-19 08:05:39.881524 | orchestrator | 08:05:39.881 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-02-19 08:05:39.881598 | orchestrator | 08:05:39.881 STDOUT terraform: should now work.
2025-02-19 08:05:39.881685 | orchestrator | 08:05:39.881 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-02-19 08:05:39.881803 | orchestrator | 08:05:39.881 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-02-19 08:05:39.881911 | orchestrator | 08:05:39.881 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-02-19 08:05:40.443110 | orchestrator | 08:05:40.442 STDOUT terraform: Created and switched to workspace "ci"!
2025-02-19 08:05:40.688167 | orchestrator | 08:05:40.442 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-02-19 08:05:40.688255 | orchestrator | 08:05:40.442 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-02-19 08:05:40.688265 | orchestrator | 08:05:40.442 STDOUT terraform: for this configuration.
2025-02-19 08:05:40.688284 | orchestrator | 08:05:40.687 STDOUT terraform: ci.auto.tfvars
2025-02-19 08:05:41.811409 | orchestrator | 08:05:41.811 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-02-19 08:05:42.383476 | orchestrator | 08:05:42.383 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-02-19 08:05:42.614527 | orchestrator | 08:05:42.614 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-02-19 08:05:42.614612 | orchestrator | 08:05:42.614 STDOUT terraform: plan.
Resource actions are indicated with the following symbols: 2025-02-19 08:05:42.614624 | orchestrator | 08:05:42.614 STDOUT terraform:  + create 2025-02-19 08:05:42.614716 | orchestrator | 08:05:42.614 STDOUT terraform:  <= read (data resources) 2025-02-19 08:05:42.614727 | orchestrator | 08:05:42.614 STDOUT terraform: OpenTofu will perform the following actions: 2025-02-19 08:05:42.615103 | orchestrator | 08:05:42.614 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-02-19 08:05:42.615154 | orchestrator | 08:05:42.615 STDOUT terraform:  # (config refers to values not yet known) 2025-02-19 08:05:42.615220 | orchestrator | 08:05:42.615 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-02-19 08:05:42.615253 | orchestrator | 08:05:42.615 STDOUT terraform:  + checksum = (known after apply) 2025-02-19 08:05:42.615305 | orchestrator | 08:05:42.615 STDOUT terraform:  + created_at = (known after apply) 2025-02-19 08:05:42.615370 | orchestrator | 08:05:42.615 STDOUT terraform:  + file = (known after apply) 2025-02-19 08:05:42.615407 | orchestrator | 08:05:42.615 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.615463 | orchestrator | 08:05:42.615 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.615504 | orchestrator | 08:05:42.615 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-02-19 08:05:42.615565 | orchestrator | 08:05:42.615 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-02-19 08:05:42.615586 | orchestrator | 08:05:42.615 STDOUT terraform:  + most_recent = true 2025-02-19 08:05:42.615647 | orchestrator | 08:05:42.615 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.615684 | orchestrator | 08:05:42.615 STDOUT terraform:  + protected = (known after apply) 2025-02-19 08:05:42.615733 | orchestrator | 08:05:42.615 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.615795 | orchestrator | 08:05:42.615 STDOUT terraform:  + schema = (known after apply) 2025-02-19 08:05:42.615832 | orchestrator | 08:05:42.615 STDOUT terraform:  + size_bytes = (known after apply) 2025-02-19 08:05:42.615897 | orchestrator | 08:05:42.615 STDOUT terraform:  + tags = (known after apply) 2025-02-19 08:05:42.615942 | orchestrator | 08:05:42.615 STDOUT terraform:  + updated_at = (known after apply) 2025-02-19 08:05:42.615953 | orchestrator | 08:05:42.615 STDOUT terraform:  } 2025-02-19 08:05:42.616073 | orchestrator | 08:05:42.615 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-02-19 08:05:42.616115 | orchestrator | 08:05:42.616 STDOUT terraform:  # (config refers to values not yet known) 2025-02-19 08:05:42.616177 | orchestrator | 08:05:42.616 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-02-19 08:05:42.616221 | orchestrator | 08:05:42.616 STDOUT terraform:  + checksum = (known after apply) 2025-02-19 08:05:42.616273 | orchestrator | 08:05:42.616 STDOUT terraform:  + created_at = (known after apply) 2025-02-19 08:05:42.616327 | orchestrator | 08:05:42.616 STDOUT terraform:  + file = (known after apply) 2025-02-19 08:05:42.616370 | orchestrator | 08:05:42.616 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.616430 | orchestrator | 08:05:42.616 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.616466 | orchestrator | 08:05:42.616 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-02-19 08:05:42.616526 | orchestrator | 08:05:42.616 STDOUT terraform:  + 
min_ram_mb = (known after apply) 2025-02-19 08:05:42.616554 | orchestrator | 08:05:42.616 STDOUT terraform:  + most_recent = true 2025-02-19 08:05:42.616602 | orchestrator | 08:05:42.616 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.616651 | orchestrator | 08:05:42.616 STDOUT terraform:  + protected = (known after apply) 2025-02-19 08:05:42.616712 | orchestrator | 08:05:42.616 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.616751 | orchestrator | 08:05:42.616 STDOUT terraform:  + schema = (known after apply) 2025-02-19 08:05:42.616800 | orchestrator | 08:05:42.616 STDOUT terraform:  + size_bytes = (known after apply) 2025-02-19 08:05:42.616847 | orchestrator | 08:05:42.616 STDOUT terraform:  + tags = (known after apply) 2025-02-19 08:05:42.616897 | orchestrator | 08:05:42.616 STDOUT terraform:  + updated_at = (known after apply) 2025-02-19 08:05:42.616919 | orchestrator | 08:05:42.616 STDOUT terraform:  } 2025-02-19 08:05:42.616986 | orchestrator | 08:05:42.616 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-02-19 08:05:42.617050 | orchestrator | 08:05:42.616 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-02-19 08:05:42.617113 | orchestrator | 08:05:42.617 STDOUT terraform:  + content = (known after apply) 2025-02-19 08:05:42.617179 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-19 08:05:42.617228 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-19 08:05:42.617291 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-19 08:05:42.617364 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-19 08:05:42.617413 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-19 08:05:42.617471 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-19 08:05:42.617520 | orchestrator | 08:05:42.617 STDOUT terraform:  + directory_permission = "0777" 2025-02-19 08:05:42.617553 | orchestrator | 08:05:42.617 STDOUT terraform:  + file_permission = "0644" 2025-02-19 08:05:42.617623 | orchestrator | 08:05:42.617 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-02-19 08:05:42.617674 | orchestrator | 08:05:42.617 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.617692 | orchestrator | 08:05:42.617 STDOUT terraform:  } 2025-02-19 08:05:42.617739 | orchestrator | 08:05:42.617 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-02-19 08:05:42.617793 | orchestrator | 08:05:42.617 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-02-19 08:05:42.617846 | orchestrator | 08:05:42.617 STDOUT terraform:  + content = (known after apply) 2025-02-19 08:05:42.617906 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-19 08:05:42.617965 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-19 08:05:42.618085 | orchestrator | 08:05:42.617 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-19 08:05:42.618150 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-19 08:05:42.618216 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-19 08:05:42.618268 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_sha512 = (known after apply) 
2025-02-19 08:05:42.618316 | orchestrator | 08:05:42.618 STDOUT terraform:  + directory_permission = "0777" 2025-02-19 08:05:42.618349 | orchestrator | 08:05:42.618 STDOUT terraform:  + file_permission = "0644" 2025-02-19 08:05:42.618403 | orchestrator | 08:05:42.618 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-02-19 08:05:42.618471 | orchestrator | 08:05:42.618 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.618480 | orchestrator | 08:05:42.618 STDOUT terraform:  } 2025-02-19 08:05:42.618519 | orchestrator | 08:05:42.618 STDOUT terraform:  # local_file.inventory will be created 2025-02-19 08:05:42.618565 | orchestrator | 08:05:42.618 STDOUT terraform:  + resource "local_file" "inventory" { 2025-02-19 08:05:42.618615 | orchestrator | 08:05:42.618 STDOUT terraform:  + content = (known after apply) 2025-02-19 08:05:42.618669 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-19 08:05:42.618732 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-19 08:05:42.618798 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-19 08:05:42.618860 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-19 08:05:42.618941 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-19 08:05:42.618980 | orchestrator | 08:05:42.618 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-19 08:05:42.619021 | orchestrator | 08:05:42.618 STDOUT terraform:  + directory_permission = "0777" 2025-02-19 08:05:42.619053 | orchestrator | 08:05:42.619 STDOUT terraform:  + file_permission = "0644" 2025-02-19 08:05:42.619103 | orchestrator | 08:05:42.619 STDOUT terraform:  + filename = "inventory.ci" 2025-02-19 08:05:42.619160 | orchestrator | 08:05:42.619 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.619177 | orchestrator | 08:05:42.619 STDOUT terraform:  } 2025-02-19 08:05:42.619232 | orchestrator | 08:05:42.619 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-02-19 08:05:42.619270 | orchestrator | 08:05:42.619 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-02-19 08:05:42.619321 | orchestrator | 08:05:42.619 STDOUT terraform:  + content = (sensitive value) 2025-02-19 08:05:42.619377 | orchestrator | 08:05:42.619 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-02-19 08:05:42.619436 | orchestrator | 08:05:42.619 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-02-19 08:05:42.619489 | orchestrator | 08:05:42.619 STDOUT terraform:  + content_md5 = (known after apply) 2025-02-19 08:05:42.619545 | orchestrator | 08:05:42.619 STDOUT terraform:  + content_sha1 = (known after apply) 2025-02-19 08:05:42.619600 | orchestrator | 08:05:42.619 STDOUT terraform:  + content_sha256 = (known after apply) 2025-02-19 08:05:42.619655 | orchestrator | 08:05:42.619 STDOUT terraform:  + content_sha512 = (known after apply) 2025-02-19 08:05:42.619701 | orchestrator | 08:05:42.619 STDOUT terraform:  + directory_permission = "0700" 2025-02-19 08:05:42.619735 | orchestrator | 08:05:42.619 STDOUT terraform:  + file_permission = "0600" 2025-02-19 08:05:42.619778 | orchestrator | 08:05:42.619 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-02-19 08:05:42.619833 | orchestrator | 08:05:42.619 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.619855 | orchestrator | 08:05:42.619 STDOUT 
terraform:  } 2025-02-19 08:05:42.619902 | orchestrator | 08:05:42.619 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-02-19 08:05:42.619948 | orchestrator | 08:05:42.619 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-02-19 08:05:42.619982 | orchestrator | 08:05:42.619 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.620017 | orchestrator | 08:05:42.619 STDOUT terraform:  } 2025-02-19 08:05:42.620113 | orchestrator | 08:05:42.619 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-02-19 08:05:42.620185 | orchestrator | 08:05:42.620 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-02-19 08:05:42.620232 | orchestrator | 08:05:42.620 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.620266 | orchestrator | 08:05:42.620 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.620314 | orchestrator | 08:05:42.620 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.620363 | orchestrator | 08:05:42.620 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.620412 | orchestrator | 08:05:42.620 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.620473 | orchestrator | 08:05:42.620 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-02-19 08:05:42.620519 | orchestrator | 08:05:42.620 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.620554 | orchestrator | 08:05:42.620 STDOUT terraform:  + size = 80 2025-02-19 08:05:42.620588 | orchestrator | 08:05:42.620 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.620603 | orchestrator | 08:05:42.620 STDOUT terraform:  } 2025-02-19 08:05:42.620668 | orchestrator | 08:05:42.620 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-02-19 08:05:42.620730 | orchestrator | 08:05:42.620 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-19 08:05:42.620772 | orchestrator | 08:05:42.620 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.620801 | orchestrator | 08:05:42.620 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.620845 | orchestrator | 08:05:42.620 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.620888 | orchestrator | 08:05:42.620 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.620930 | orchestrator | 08:05:42.620 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.620983 | orchestrator | 08:05:42.620 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-02-19 08:05:42.621038 | orchestrator | 08:05:42.620 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.621067 | orchestrator | 08:05:42.621 STDOUT terraform:  + size = 80 2025-02-19 08:05:42.621095 | orchestrator | 08:05:42.621 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.621115 | orchestrator | 08:05:42.621 STDOUT terraform:  } 2025-02-19 08:05:42.621254 | orchestrator | 08:05:42.621 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-02-19 08:05:42.621347 | orchestrator | 08:05:42.621 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-19 08:05:42.621391 | orchestrator | 08:05:42.621 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.621421 | orchestrator | 08:05:42.621 STDOUT terraform:  + 
availability_zone = "nova" 2025-02-19 08:05:42.621469 | orchestrator | 08:05:42.621 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.621510 | orchestrator | 08:05:42.621 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.621552 | orchestrator | 08:05:42.621 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.621609 | orchestrator | 08:05:42.621 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-02-19 08:05:42.621655 | orchestrator | 08:05:42.621 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.621683 | orchestrator | 08:05:42.621 STDOUT terraform:  + size = 80 2025-02-19 08:05:42.621711 | orchestrator | 08:05:42.621 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.621729 | orchestrator | 08:05:42.621 STDOUT terraform:  } 2025-02-19 08:05:42.621794 | orchestrator | 08:05:42.621 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-02-19 08:05:42.621856 | orchestrator | 08:05:42.621 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-19 08:05:42.621898 | orchestrator | 08:05:42.621 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.621927 | orchestrator | 08:05:42.621 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.621968 | orchestrator | 08:05:42.621 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.622196 | orchestrator | 08:05:42.621 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.622225 | orchestrator | 08:05:42.622 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.622231 | orchestrator | 08:05:42.622 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-02-19 08:05:42.622237 | orchestrator | 08:05:42.622 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.622243 | orchestrator | 08:05:42.622 STDOUT terraform:  + size = 80 2025-02-19 08:05:42.622250 | orchestrator | 08:05:42.622 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.622303 | orchestrator | 08:05:42.622 STDOUT terraform:  } 2025-02-19 08:05:42.622311 | orchestrator | 08:05:42.622 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-02-19 08:05:42.622373 | orchestrator | 08:05:42.622 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-19 08:05:42.622411 | orchestrator | 08:05:42.622 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.622441 | orchestrator | 08:05:42.622 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.622485 | orchestrator | 08:05:42.622 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.622527 | orchestrator | 08:05:42.622 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.622569 | orchestrator | 08:05:42.622 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.622620 | orchestrator | 08:05:42.622 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-02-19 08:05:42.622662 | orchestrator | 08:05:42.622 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.622691 | orchestrator | 08:05:42.622 STDOUT terraform:  + size = 80 2025-02-19 08:05:42.622719 | orchestrator | 08:05:42.622 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.622737 | orchestrator | 08:05:42.622 STDOUT terraform:  } 2025-02-19 08:05:42.622804 | orchestrator | 08:05:42.622 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-02-19 08:05:42.622871 | orchestrator | 08:05:42.622 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-19 08:05:42.622915 | orchestrator | 08:05:42.622 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.622941 | orchestrator | 08:05:42.622 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.622986 | orchestrator | 08:05:42.622 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.623042 | orchestrator | 08:05:42.622 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.623084 | orchestrator | 08:05:42.623 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.623136 | orchestrator | 08:05:42.623 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-02-19 08:05:42.623203 | orchestrator | 08:05:42.623 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.623246 | orchestrator | 08:05:42.623 STDOUT terraform:  + size = 80 2025-02-19 08:05:42.623294 | orchestrator | 08:05:42.623 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.623325 | orchestrator | 08:05:42.623 STDOUT terraform:  } 2025-02-19 08:05:42.623395 | orchestrator | 08:05:42.623 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-02-19 08:05:42.623476 | orchestrator | 08:05:42.623 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-02-19 08:05:42.623545 | orchestrator | 08:05:42.623 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.623594 | orchestrator | 08:05:42.623 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.623668 | orchestrator | 08:05:42.623 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.623743 | orchestrator | 08:05:42.623 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.623817 | orchestrator | 08:05:42.623 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.623877 | orchestrator | 08:05:42.623 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-02-19 08:05:42.623948 | orchestrator | 08:05:42.623 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.623982 | orchestrator | 08:05:42.623 STDOUT terraform:  + size = 80 2025-02-19 08:05:42.624055 | orchestrator | 08:05:42.623 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.624083 | orchestrator | 08:05:42.624 STDOUT terraform:  } 2025-02-19 08:05:42.624159 | orchestrator | 08:05:42.624 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-02-19 08:05:42.624303 | orchestrator | 08:05:42.624 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.624329 | orchestrator | 08:05:42.624 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.624370 | orchestrator | 08:05:42.624 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.624412 | orchestrator | 08:05:42.624 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.624452 | orchestrator | 08:05:42.624 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.624498 | orchestrator | 08:05:42.624 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-02-19 08:05:42.624543 | orchestrator | 08:05:42.624 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.624569 | orchestrator | 08:05:42.624 STDOUT terraform:  + size 
= 20 2025-02-19 08:05:42.624593 | orchestrator | 08:05:42.624 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.624604 | orchestrator | 08:05:42.624 STDOUT terraform:  } 2025-02-19 08:05:42.624662 | orchestrator | 08:05:42.624 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-02-19 08:05:42.624717 | orchestrator | 08:05:42.624 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.624767 | orchestrator | 08:05:42.624 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.624794 | orchestrator | 08:05:42.624 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.624850 | orchestrator | 08:05:42.624 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.624864 | orchestrator | 08:05:42.624 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.624916 | orchestrator | 08:05:42.624 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-02-19 08:05:42.624954 | orchestrator | 08:05:42.624 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.624982 | orchestrator | 08:05:42.624 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.625025 | orchestrator | 08:05:42.624 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.625032 | orchestrator | 08:05:42.625 STDOUT terraform:  } 2025-02-19 08:05:42.625090 | orchestrator | 08:05:42.625 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-02-19 08:05:42.625144 | orchestrator | 08:05:42.625 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.625183 | orchestrator | 08:05:42.625 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.625209 | orchestrator | 08:05:42.625 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.625249 | orchestrator | 08:05:42.625 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.625285 | orchestrator | 08:05:42.625 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.625331 | orchestrator | 08:05:42.625 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-02-19 08:05:42.625369 | orchestrator | 08:05:42.625 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.625395 | orchestrator | 08:05:42.625 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.625421 | orchestrator | 08:05:42.625 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.625428 | orchestrator | 08:05:42.625 STDOUT terraform:  } 2025-02-19 08:05:42.625487 | orchestrator | 08:05:42.625 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-02-19 08:05:42.625543 | orchestrator | 08:05:42.625 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.625582 | orchestrator | 08:05:42.625 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.625602 | orchestrator | 08:05:42.625 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.625639 | orchestrator | 08:05:42.625 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.625677 | orchestrator | 08:05:42.625 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.625724 | orchestrator | 08:05:42.625 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-02-19 08:05:42.625763 | orchestrator | 08:05:42.625 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.625790 | orchestrator | 08:05:42.625 STDOUT 
terraform:  + size = 20 2025-02-19 08:05:42.625816 | orchestrator | 08:05:42.625 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.625823 | orchestrator | 08:05:42.625 STDOUT terraform:  } 2025-02-19 08:05:42.625885 | orchestrator | 08:05:42.625 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-02-19 08:05:42.625938 | orchestrator | 08:05:42.625 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.625977 | orchestrator | 08:05:42.625 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.626036 | orchestrator | 08:05:42.625 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.626071 | orchestrator | 08:05:42.625 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.626109 | orchestrator | 08:05:42.626 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.626158 | orchestrator | 08:05:42.626 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-02-19 08:05:42.626199 | orchestrator | 08:05:42.626 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.626226 | orchestrator | 08:05:42.626 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.626252 | orchestrator | 08:05:42.626 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.626259 | orchestrator | 08:05:42.626 STDOUT terraform:  } 2025-02-19 08:05:42.626319 | orchestrator | 08:05:42.626 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-02-19 08:05:42.626371 | orchestrator | 08:05:42.626 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.626409 | orchestrator | 08:05:42.626 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.626435 | orchestrator | 08:05:42.626 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.626474 | orchestrator | 08:05:42.626 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.626515 | orchestrator | 08:05:42.626 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.626560 | orchestrator | 08:05:42.626 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-02-19 08:05:42.626601 | orchestrator | 08:05:42.626 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.626613 | orchestrator | 08:05:42.626 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.626644 | orchestrator | 08:05:42.626 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.626652 | orchestrator | 08:05:42.626 STDOUT terraform:  } 2025-02-19 08:05:42.626710 | orchestrator | 08:05:42.626 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-02-19 08:05:42.626764 | orchestrator | 08:05:42.626 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.626803 | orchestrator | 08:05:42.626 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.626829 | orchestrator | 08:05:42.626 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.626867 | orchestrator | 08:05:42.626 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.626906 | orchestrator | 08:05:42.626 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.626953 | orchestrator | 08:05:42.626 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-02-19 08:05:42.626990 | orchestrator | 08:05:42.626 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.627178 | orchestrator | 
08:05:42.626 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.627271 | orchestrator | 08:05:42.627 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.627326 | orchestrator | 08:05:42.627 STDOUT terraform:  } 2025-02-19 08:05:42.627352 | orchestrator | 08:05:42.627 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-02-19 08:05:42.627398 | orchestrator | 08:05:42.627 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.627415 | orchestrator | 08:05:42.627 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.627429 | orchestrator | 08:05:42.627 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.627442 | orchestrator | 08:05:42.627 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.627455 | orchestrator | 08:05:42.627 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.627468 | orchestrator | 08:05:42.627 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-02-19 08:05:42.627481 | orchestrator | 08:05:42.627 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.627497 | orchestrator | 08:05:42.627 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.627556 | orchestrator | 08:05:42.627 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.627571 | orchestrator | 08:05:42.627 STDOUT terraform:  } 2025-02-19 08:05:42.627587 | orchestrator | 08:05:42.627 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-02-19 08:05:42.627603 | orchestrator | 08:05:42.627 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.627653 | orchestrator | 08:05:42.627 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.627676 | orchestrator | 08:05:42.627 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.627694 | orchestrator | 08:05:42.627 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.627715 | orchestrator | 08:05:42.627 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.627731 | orchestrator | 08:05:42.627 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-02-19 08:05:42.627747 | orchestrator | 08:05:42.627 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.627763 | orchestrator | 08:05:42.627 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.627803 | orchestrator | 08:05:42.627 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.627874 | orchestrator | 08:05:42.627 STDOUT terraform:  } 2025-02-19 08:05:42.627905 | orchestrator | 08:05:42.627 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-02-19 08:05:42.627971 | orchestrator | 08:05:42.627 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.628039 | orchestrator | 08:05:42.627 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.628062 | orchestrator | 08:05:42.627 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.628075 | orchestrator | 08:05:42.627 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.628092 | orchestrator | 08:05:42.627 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.628144 | orchestrator | 08:05:42.628 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-02-19 08:05:42.628174 | orchestrator | 08:05:42.628 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.628188 | 
orchestrator | 08:05:42.628 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.628201 | orchestrator | 08:05:42.628 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.628217 | orchestrator | 08:05:42.628 STDOUT terraform:  } 2025-02-19 08:05:42.628233 | orchestrator | 08:05:42.628 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-02-19 08:05:42.628296 | orchestrator | 08:05:42.628 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.628314 | orchestrator | 08:05:42.628 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.628352 | orchestrator | 08:05:42.628 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.628369 | orchestrator | 08:05:42.628 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.628423 | orchestrator | 08:05:42.628 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.628470 | orchestrator | 08:05:42.628 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-02-19 08:05:42.628488 | orchestrator | 08:05:42.628 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.628536 | orchestrator | 08:05:42.628 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.628551 | orchestrator | 08:05:42.628 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.628567 | orchestrator | 08:05:42.628 STDOUT terraform:  } 2025-02-19 08:05:42.628605 | orchestrator | 08:05:42.628 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-02-19 08:05:42.628656 | orchestrator | 08:05:42.628 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.628707 | orchestrator | 08:05:42.628 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.628721 | orchestrator | 08:05:42.628 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.628737 | orchestrator | 08:05:42.628 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.628778 | orchestrator | 08:05:42.628 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.628826 | orchestrator | 08:05:42.628 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-02-19 08:05:42.628843 | orchestrator | 08:05:42.628 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.628860 | orchestrator | 08:05:42.628 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.628875 | orchestrator | 08:05:42.628 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.628891 | orchestrator | 08:05:42.628 STDOUT terraform:  } 2025-02-19 08:05:42.628952 | orchestrator | 08:05:42.628 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-02-19 08:05:42.629020 | orchestrator | 08:05:42.628 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.629039 | orchestrator | 08:05:42.628 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.629063 | orchestrator | 08:05:42.629 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.629103 | orchestrator | 08:05:42.629 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.629120 | orchestrator | 08:05:42.629 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.629175 | orchestrator | 08:05:42.629 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-02-19 08:05:42.629192 | orchestrator | 08:05:42.629 STDOUT terraform:  + region = (known after apply) 
2025-02-19 08:05:42.629241 | orchestrator | 08:05:42.629 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.629256 | orchestrator | 08:05:42.629 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.629271 | orchestrator | 08:05:42.629 STDOUT terraform:  } 2025-02-19 08:05:42.629309 | orchestrator | 08:05:42.629 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-02-19 08:05:42.629361 | orchestrator | 08:05:42.629 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.629378 | orchestrator | 08:05:42.629 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.629416 | orchestrator | 08:05:42.629 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.629432 | orchestrator | 08:05:42.629 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.629482 | orchestrator | 08:05:42.629 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.629520 | orchestrator | 08:05:42.629 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-02-19 08:05:42.629550 | orchestrator | 08:05:42.629 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.629567 | orchestrator | 08:05:42.629 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.629582 | orchestrator | 08:05:42.629 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.629598 | orchestrator | 08:05:42.629 STDOUT terraform:  } 2025-02-19 08:05:42.629662 | orchestrator | 08:05:42.629 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-02-19 08:05:42.629713 | orchestrator | 08:05:42.629 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.629730 | orchestrator | 08:05:42.629 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.629778 | orchestrator | 08:05:42.629 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.629795 | orchestrator | 08:05:42.629 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.629832 | orchestrator | 08:05:42.629 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.629871 | orchestrator | 08:05:42.629 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-02-19 08:05:42.629909 | orchestrator | 08:05:42.629 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.629945 | orchestrator | 08:05:42.629 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.629961 | orchestrator | 08:05:42.629 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.630150 | orchestrator | 08:05:42.629 STDOUT terraform:  } 2025-02-19 08:05:42.630180 | orchestrator | 08:05:42.629 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-02-19 08:05:42.630230 | orchestrator | 08:05:42.629 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.630245 | orchestrator | 08:05:42.630 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.630258 | orchestrator | 08:05:42.630 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.630270 | orchestrator | 08:05:42.630 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.630291 | orchestrator | 08:05:42.630 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.630305 | orchestrator | 08:05:42.630 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-02-19 08:05:42.630317 | orchestrator | 08:05:42.630 STDOUT terraform:  + region 
= (known after apply) 2025-02-19 08:05:42.630333 | orchestrator | 08:05:42.630 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.630460 | orchestrator | 08:05:42.630 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.630490 | orchestrator | 08:05:42.630 STDOUT terraform:  } 2025-02-19 08:05:42.630501 | orchestrator | 08:05:42.630 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-02-19 08:05:42.630528 | orchestrator | 08:05:42.630 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.630535 | orchestrator | 08:05:42.630 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.630540 | orchestrator | 08:05:42.630 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.630547 | orchestrator | 08:05:42.630 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.630554 | orchestrator | 08:05:42.630 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.630605 | orchestrator | 08:05:42.630 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-02-19 08:05:42.630644 | orchestrator | 08:05:42.630 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.630655 | orchestrator | 08:05:42.630 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.630682 | orchestrator | 08:05:42.630 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.630690 | orchestrator | 08:05:42.630 STDOUT terraform:  } 2025-02-19 08:05:42.630748 | orchestrator | 08:05:42.630 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-02-19 08:05:42.630798 | orchestrator | 08:05:42.630 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-19 08:05:42.630834 | orchestrator | 08:05:42.630 STDOUT terraform:  + attachment = (known after apply) 2025-02-19 08:05:42.630858 | orchestrator | 08:05:42.630 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.630896 | orchestrator | 08:05:42.630 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.630931 | orchestrator | 08:05:42.630 STDOUT terraform:  + metadata = (known after apply) 2025-02-19 08:05:42.630974 | orchestrator | 08:05:42.630 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-02-19 08:05:42.631029 | orchestrator | 08:05:42.630 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.631056 | orchestrator | 08:05:42.631 STDOUT terraform:  + size = 20 2025-02-19 08:05:42.631081 | orchestrator | 08:05:42.631 STDOUT terraform:  + volume_type = "ssd" 2025-02-19 08:05:42.631088 | orchestrator | 08:05:42.631 STDOUT terraform:  } 2025-02-19 08:05:42.631143 | orchestrator | 08:05:42.631 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-02-19 08:05:42.631195 | orchestrator | 08:05:42.631 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-02-19 08:05:42.631235 | orchestrator | 08:05:42.631 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-19 08:05:42.631274 | orchestrator | 08:05:42.631 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-19 08:05:42.631315 | orchestrator | 08:05:42.631 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-19 08:05:42.631355 | orchestrator | 08:05:42.631 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.631383 | orchestrator | 08:05:42.631 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.631408 | orchestrator | 
08:05:42.631 STDOUT terraform:  + config_drive = true 2025-02-19 08:05:42.631447 | orchestrator | 08:05:42.631 STDOUT terraform:  + created = (known after apply) 2025-02-19 08:05:42.631488 | orchestrator | 08:05:42.631 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-19 08:05:42.631524 | orchestrator | 08:05:42.631 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-02-19 08:05:42.631549 | orchestrator | 08:05:42.631 STDOUT terraform:  + force_delete = false 2025-02-19 08:05:42.631589 | orchestrator | 08:05:42.631 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.631631 | orchestrator | 08:05:42.631 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.631669 | orchestrator | 08:05:42.631 STDOUT terraform:  + image_name = (known after apply) 2025-02-19 08:05:42.631684 | orchestrator | 08:05:42.631 STDOUT terraform:  + key_pair = "testbed" 2025-02-19 08:05:42.631721 | orchestrator | 08:05:42.631 STDOUT terraform:  + name = "testbed-manager" 2025-02-19 08:05:42.631748 | orchestrator | 08:05:42.631 STDOUT terraform:  + power_state = "active" 2025-02-19 08:05:42.631788 | orchestrator | 08:05:42.631 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.631825 | orchestrator | 08:05:42.631 STDOUT terraform:  + security_groups = (known after apply) 2025-02-19 08:05:42.631851 | orchestrator | 08:05:42.631 STDOUT terraform:  + stop_before_destroy = false 2025-02-19 08:05:42.631889 | orchestrator | 08:05:42.631 STDOUT terraform:  + updated = (known after apply) 2025-02-19 08:05:42.631928 | orchestrator | 08:05:42.631 STDOUT terraform:  + user_data = (known after apply) 2025-02-19 08:05:42.631936 | orchestrator | 08:05:42.631 STDOUT terraform:  + block_device { 2025-02-19 08:05:42.631967 | orchestrator | 08:05:42.631 STDOUT terraform:  + boot_index = 0 2025-02-19 08:05:42.631996 | orchestrator | 08:05:42.631 STDOUT terraform:  + delete_on_termination = false 2025-02-19 08:05:42.632046 | orchestrator | 08:05:42.631 STDOUT terraform:  + destination_type = "volume" 2025-02-19 08:05:42.632060 | orchestrator | 08:05:42.632 STDOUT terraform:  + multiattach = false 2025-02-19 08:05:42.632098 | orchestrator | 08:05:42.632 STDOUT terraform:  + source_type = "volume" 2025-02-19 08:05:42.632139 | orchestrator | 08:05:42.632 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.632151 | orchestrator | 08:05:42.632 STDOUT terraform:  } 2025-02-19 08:05:42.632159 | orchestrator | 08:05:42.632 STDOUT terraform:  + network { 2025-02-19 08:05:42.632185 | orchestrator | 08:05:42.632 STDOUT terraform:  + access_network = false 2025-02-19 08:05:42.632217 | orchestrator | 08:05:42.632 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-19 08:05:42.632249 | orchestrator | 08:05:42.632 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-19 08:05:42.632283 | orchestrator | 08:05:42.632 STDOUT terraform:  + mac = (known after apply) 2025-02-19 08:05:42.632316 | orchestrator | 08:05:42.632 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.632349 | orchestrator | 08:05:42.632 STDOUT terraform:  + port = (known after apply) 2025-02-19 08:05:42.632383 | orchestrator | 08:05:42.632 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.632390 | orchestrator | 08:05:42.632 STDOUT terraform:  } 2025-02-19 08:05:42.632397 | orchestrator | 08:05:42.632 STDOUT terraform:  } 2025-02-19 08:05:42.632452 | orchestrator | 08:05:42.632 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be 
created 2025-02-19 08:05:42.632495 | orchestrator | 08:05:42.632 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-19 08:05:42.632532 | orchestrator | 08:05:42.632 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-19 08:05:42.632570 | orchestrator | 08:05:42.632 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-19 08:05:42.632607 | orchestrator | 08:05:42.632 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-19 08:05:42.632653 | orchestrator | 08:05:42.632 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.632666 | orchestrator | 08:05:42.632 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.632695 | orchestrator | 08:05:42.632 STDOUT terraform:  + config_drive = true 2025-02-19 08:05:42.632730 | orchestrator | 08:05:42.632 STDOUT terraform:  + created = (known after apply) 2025-02-19 08:05:42.632767 | orchestrator | 08:05:42.632 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-19 08:05:42.632798 | orchestrator | 08:05:42.632 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-19 08:05:42.632824 | orchestrator | 08:05:42.632 STDOUT terraform:  + force_delete = false 2025-02-19 08:05:42.632862 | orchestrator | 08:05:42.632 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.632900 | orchestrator | 08:05:42.632 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.632937 | orchestrator | 08:05:42.632 STDOUT terraform:  + image_name = (known after apply) 2025-02-19 08:05:42.632964 | orchestrator | 08:05:42.632 STDOUT terraform:  + key_pair = "testbed" 2025-02-19 08:05:42.632997 | orchestrator | 08:05:42.632 STDOUT terraform:  + name = "testbed-node-0" 2025-02-19 08:05:42.633030 | orchestrator | 08:05:42.632 STDOUT terraform:  + power_state = "active" 2025-02-19 08:05:42.633180 | orchestrator | 08:05:42.633 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.633232 | orchestrator | 08:05:42.633 STDOUT terraform:  + security_groups = (known after apply) 2025-02-19 08:05:42.633246 | orchestrator | 08:05:42.633 STDOUT terraform:  + stop_before_destroy = false 2025-02-19 08:05:42.633257 | orchestrator | 08:05:42.633 STDOUT terraform:  + updated = (known after apply) 2025-02-19 08:05:42.633271 | orchestrator | 08:05:42.633 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-19 08:05:42.633297 | orchestrator | 08:05:42.633 STDOUT terraform:  + block_device { 2025-02-19 08:05:42.633309 | orchestrator | 08:05:42.633 STDOUT terraform:  + boot_index = 0 2025-02-19 08:05:42.633330 | orchestrator | 08:05:42.633 STDOUT terraform:  + delete_on_termination = false 2025-02-19 08:05:42.633345 | orchestrator | 08:05:42.633 STDOUT terraform:  + destination_type = "volume" 2025-02-19 08:05:42.633399 | orchestrator | 08:05:42.633 STDOUT terraform:  + multiattach = false 2025-02-19 08:05:42.633412 | orchestrator | 08:05:42.633 STDOUT terraform:  + source_type = "volume" 2025-02-19 08:05:42.633426 | orchestrator | 08:05:42.633 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.633473 | orchestrator | 08:05:42.633 STDOUT terraform:  } 2025-02-19 08:05:42.633486 | orchestrator | 08:05:42.633 STDOUT terraform:  + network { 2025-02-19 08:05:42.633498 | orchestrator | 08:05:42.633 STDOUT terraform:  + access_network = false 2025-02-19 08:05:42.633513 | orchestrator | 08:05:42.633 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-19 08:05:42.633525 | orchestrator | 08:05:42.633 
STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-19 08:05:42.633540 | orchestrator | 08:05:42.633 STDOUT terraform:  + mac = (known after apply) 2025-02-19 08:05:42.633554 | orchestrator | 08:05:42.633 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.633594 | orchestrator | 08:05:42.633 STDOUT terraform:  + port = (known after apply) 2025-02-19 08:05:42.633610 | orchestrator | 08:05:42.633 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.633625 | orchestrator | 08:05:42.633 STDOUT terraform:  } 2025-02-19 08:05:42.633639 | orchestrator | 08:05:42.633 STDOUT terraform:  } 2025-02-19 08:05:42.633690 | orchestrator | 08:05:42.633 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-02-19 08:05:42.633756 | orchestrator | 08:05:42.633 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-19 08:05:42.633804 | orchestrator | 08:05:42.633 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-19 08:05:42.633829 | orchestrator | 08:05:42.633 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-19 08:05:42.633862 | orchestrator | 08:05:42.633 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-19 08:05:42.633903 | orchestrator | 08:05:42.633 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.633970 | orchestrator | 08:05:42.633 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.633994 | orchestrator | 08:05:42.633 STDOUT terraform:  + config_drive = true 2025-02-19 08:05:42.634084 | orchestrator | 08:05:42.633 STDOUT terraform:  + created = (known after apply) 2025-02-19 08:05:42.634108 | orchestrator | 08:05:42.633 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-19 08:05:42.634143 | orchestrator | 08:05:42.633 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-19 08:05:42.634165 | orchestrator | 08:05:42.633 STDOUT terraform:  + force_delete = false 2025-02-19 08:05:42.634191 | orchestrator | 08:05:42.634 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.634213 | orchestrator | 08:05:42.634 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.634234 | orchestrator | 08:05:42.634 STDOUT terraform:  + image_name = (known after apply) 2025-02-19 08:05:42.634254 | orchestrator | 08:05:42.634 STDOUT terraform:  + key_pair = "testbed" 2025-02-19 08:05:42.634278 | orchestrator | 08:05:42.634 STDOUT terraform:  + name = "testbed-node-1" 2025-02-19 08:05:42.634297 | orchestrator | 08:05:42.634 STDOUT terraform:  + power_state = "active" 2025-02-19 08:05:42.634317 | orchestrator | 08:05:42.634 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.634342 | orchestrator | 08:05:42.634 STDOUT terraform:  + security_groups = (known after apply) 2025-02-19 08:05:42.634364 | orchestrator | 08:05:42.634 STDOUT terraform:  + stop_before_destroy = false 2025-02-19 08:05:42.634388 | orchestrator | 08:05:42.634 STDOUT terraform:  + updated = (known after apply) 2025-02-19 08:05:42.634411 | orchestrator | 08:05:42.634 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-19 08:05:42.634437 | orchestrator | 08:05:42.634 STDOUT terraform:  + block_device { 2025-02-19 08:05:42.634475 | orchestrator | 08:05:42.634 STDOUT terraform:  + boot_index = 0 2025-02-19 08:05:42.634502 | orchestrator | 08:05:42.634 STDOUT terraform:  + delete_on_termination = false 2025-02-19 08:05:42.634523 | orchestrator | 08:05:42.634 STDOUT terraform:  + 
destination_type = "volume" 2025-02-19 08:05:42.634548 | orchestrator | 08:05:42.634 STDOUT terraform:  + multiattach = false 2025-02-19 08:05:42.634584 | orchestrator | 08:05:42.634 STDOUT terraform:  + source_type = "volume" 2025-02-19 08:05:42.634612 | orchestrator | 08:05:42.634 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.634664 | orchestrator | 08:05:42.634 STDOUT terraform:  } 2025-02-19 08:05:42.634686 | orchestrator | 08:05:42.634 STDOUT terraform:  + network { 2025-02-19 08:05:42.634706 | orchestrator | 08:05:42.634 STDOUT terraform:  + access_network = false 2025-02-19 08:05:42.634730 | orchestrator | 08:05:42.634 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-19 08:05:42.634749 | orchestrator | 08:05:42.634 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-19 08:05:42.634781 | orchestrator | 08:05:42.634 STDOUT terraform:  + mac = (known after apply) 2025-02-19 08:05:42.634800 | orchestrator | 08:05:42.634 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.634823 | orchestrator | 08:05:42.634 STDOUT terraform:  + port = (known after apply) 2025-02-19 08:05:42.634840 | orchestrator | 08:05:42.634 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.634851 | orchestrator | 08:05:42.634 STDOUT terraform:  } 2025-02-19 08:05:42.634863 | orchestrator | 08:05:42.634 STDOUT terraform:  } 2025-02-19 08:05:42.634878 | orchestrator | 08:05:42.634 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-02-19 08:05:42.634916 | orchestrator | 08:05:42.634 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-19 08:05:42.634932 | orchestrator | 08:05:42.634 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-19 08:05:42.634980 | orchestrator | 08:05:42.634 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-19 08:05:42.635039 | orchestrator | 08:05:42.634 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-19 08:05:42.635058 | orchestrator | 08:05:42.634 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.635095 | orchestrator | 08:05:42.634 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.635116 | orchestrator | 08:05:42.635 STDOUT terraform:  + config_drive = true 2025-02-19 08:05:42.635140 | orchestrator | 08:05:42.635 STDOUT terraform:  + created = (known after apply) 2025-02-19 08:05:42.635159 | orchestrator | 08:05:42.635 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-19 08:05:42.635177 | orchestrator | 08:05:42.635 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-19 08:05:42.635202 | orchestrator | 08:05:42.635 STDOUT terraform:  + force_delete = false 2025-02-19 08:05:42.635222 | orchestrator | 08:05:42.635 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.635240 | orchestrator | 08:05:42.635 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.635277 | orchestrator | 08:05:42.635 STDOUT terraform:  + image_name = (known after apply) 2025-02-19 08:05:42.635293 | orchestrator | 08:05:42.635 STDOUT terraform:  + key_pair = "testbed" 2025-02-19 08:05:42.635334 | orchestrator | 08:05:42.635 STDOUT terraform:  + name = "testbed-node-2" 2025-02-19 08:05:42.635349 | orchestrator | 08:05:42.635 STDOUT terraform:  + power_state = "active" 2025-02-19 08:05:42.635381 | orchestrator | 08:05:42.635 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.635396 | orchestrator | 08:05:42.635 STDOUT terraform:  
+ security_groups = (known after apply) 2025-02-19 08:05:42.635438 | orchestrator | 08:05:42.635 STDOUT terraform:  + stop_before_destroy = false 2025-02-19 08:05:42.635454 | orchestrator | 08:05:42.635 STDOUT terraform:  + updated = (known after apply) 2025-02-19 08:05:42.635489 | orchestrator | 08:05:42.635 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-19 08:05:42.635538 | orchestrator | 08:05:42.635 STDOUT terraform:  + block_device { 2025-02-19 08:05:42.635581 | orchestrator | 08:05:42.635 STDOUT terraform:  + boot_index = 0 2025-02-19 08:05:42.635618 | orchestrator | 08:05:42.635 STDOUT terraform:  + delete_on_termination = false 2025-02-19 08:05:42.635639 | orchestrator | 08:05:42.635 STDOUT terraform:  + destination_type = "volume" 2025-02-19 08:05:42.635661 | orchestrator | 08:05:42.635 STDOUT terraform:  + multiattach = false 2025-02-19 08:05:42.635677 | orchestrator | 08:05:42.635 STDOUT terraform:  + source_type = "volume" 2025-02-19 08:05:42.635688 | orchestrator | 08:05:42.635 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.635700 | orchestrator | 08:05:42.635 STDOUT terraform:  } 2025-02-19 08:05:42.635712 | orchestrator | 08:05:42.635 STDOUT terraform:  + network { 2025-02-19 08:05:42.635726 | orchestrator | 08:05:42.635 STDOUT terraform:  + access_network = false 2025-02-19 08:05:42.635738 | orchestrator | 08:05:42.635 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-19 08:05:42.635752 | orchestrator | 08:05:42.635 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-19 08:05:42.635766 | orchestrator | 08:05:42.635 STDOUT terraform:  + mac = (known after apply) 2025-02-19 08:05:42.635804 | orchestrator | 08:05:42.635 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.635820 | orchestrator | 08:05:42.635 STDOUT terraform:  + port = (known after apply) 2025-02-19 08:05:42.635854 | orchestrator | 08:05:42.635 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.635868 | orchestrator | 08:05:42.635 STDOUT terraform:  } 2025-02-19 08:05:42.635882 | orchestrator | 08:05:42.635 STDOUT terraform:  } 2025-02-19 08:05:42.635927 | orchestrator | 08:05:42.635 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-02-19 08:05:42.635943 | orchestrator | 08:05:42.635 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-19 08:05:42.635988 | orchestrator | 08:05:42.635 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-19 08:05:42.636038 | orchestrator | 08:05:42.635 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-19 08:05:42.636054 | orchestrator | 08:05:42.636 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-19 08:05:42.636099 | orchestrator | 08:05:42.636 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.636114 | orchestrator | 08:05:42.636 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.636129 | orchestrator | 08:05:42.636 STDOUT terraform:  + config_drive = true 2025-02-19 08:05:42.636164 | orchestrator | 08:05:42.636 STDOUT terraform:  + created = (known after apply) 2025-02-19 08:05:42.636199 | orchestrator | 08:05:42.636 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-19 08:05:42.636225 | orchestrator | 08:05:42.636 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-19 08:05:42.636241 | orchestrator | 08:05:42.636 STDOUT terraform:  + force_delete = false 2025-02-19 08:05:42.636277 | orchestrator | 
08:05:42.636 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.636305 | orchestrator | 08:05:42.636 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.636352 | orchestrator | 08:05:42.636 STDOUT terraform:  + image_name = (known after apply) 2025-02-19 08:05:42.636385 | orchestrator | 08:05:42.636 STDOUT terraform:  + key_pair = "testbed" 2025-02-19 08:05:42.636400 | orchestrator | 08:05:42.636 STDOUT terraform:  + name = "testbed-node-3" 2025-02-19 08:05:42.636440 | orchestrator | 08:05:42.636 STDOUT terraform:  + power_state = "active" 2025-02-19 08:05:42.636456 | orchestrator | 08:05:42.636 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.636470 | orchestrator | 08:05:42.636 STDOUT terraform:  + security_groups = (known after apply) 2025-02-19 08:05:42.636485 | orchestrator | 08:05:42.636 STDOUT terraform:  + stop_before_destroy = false 2025-02-19 08:05:42.636529 | orchestrator | 08:05:42.636 STDOUT terraform:  + updated = (known after apply) 2025-02-19 08:05:42.636566 | orchestrator | 08:05:42.636 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-19 08:05:42.636598 | orchestrator | 08:05:42.636 STDOUT terraform:  + block_device { 2025-02-19 08:05:42.636613 | orchestrator | 08:05:42.636 STDOUT terraform:  + boot_index = 0 2025-02-19 08:05:42.636653 | orchestrator | 08:05:42.636 STDOUT terraform:  + delete_on_termination = false 2025-02-19 08:05:42.636678 | orchestrator | 08:05:42.636 STDOUT terraform:  + destination_type = "volume" 2025-02-19 08:05:42.636697 | orchestrator | 08:05:42.636 STDOUT terraform:  + multiattach = false 2025-02-19 08:05:42.636721 | orchestrator | 08:05:42.636 STDOUT terraform:  + source_type = "volume" 2025-02-19 08:05:42.636742 | orchestrator | 08:05:42.636 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.636758 | orchestrator | 08:05:42.636 STDOUT terraform:  } 2025-02-19 08:05:42.636793 | orchestrator | 08:05:42.636 STDOUT terraform:  + network { 2025-02-19 08:05:42.636807 | orchestrator | 08:05:42.636 STDOUT terraform:  + access_network = false 2025-02-19 08:05:42.636821 | orchestrator | 08:05:42.636 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-19 08:05:42.636833 | orchestrator | 08:05:42.636 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-19 08:05:42.636847 | orchestrator | 08:05:42.636 STDOUT terraform:  + mac = (known after apply) 2025-02-19 08:05:42.636861 | orchestrator | 08:05:42.636 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.636897 | orchestrator | 08:05:42.636 STDOUT terraform:  + port = (known after apply) 2025-02-19 08:05:42.636912 | orchestrator | 08:05:42.636 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.636926 | orchestrator | 08:05:42.636 STDOUT terraform:  } 2025-02-19 08:05:42.636940 | orchestrator | 08:05:42.636 STDOUT terraform:  } 2025-02-19 08:05:42.637097 | orchestrator | 08:05:42.636 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-02-19 08:05:42.637117 | orchestrator | 08:05:42.637 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-19 08:05:42.637131 | orchestrator | 08:05:42.637 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-19 08:05:42.637187 | orchestrator | 08:05:42.637 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-19 08:05:42.637203 | orchestrator | 08:05:42.637 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-19 
08:05:42.637230 | orchestrator | 08:05:42.637 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.637244 | orchestrator | 08:05:42.637 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.637258 | orchestrator | 08:05:42.637 STDOUT terraform:  + config_drive = true 2025-02-19 08:05:42.637307 | orchestrator | 08:05:42.637 STDOUT terraform:  + created = (known after apply) 2025-02-19 08:05:42.637323 | orchestrator | 08:05:42.637 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-19 08:05:42.637358 | orchestrator | 08:05:42.637 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-19 08:05:42.637375 | orchestrator | 08:05:42.637 STDOUT terraform:  + force_delete = false 2025-02-19 08:05:42.637408 | orchestrator | 08:05:42.637 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.637446 | orchestrator | 08:05:42.637 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.637461 | orchestrator | 08:05:42.637 STDOUT terraform:  + image_name = (known after apply) 2025-02-19 08:05:42.637495 | orchestrator | 08:05:42.637 STDOUT terraform:  + key_pair = "testbed" 2025-02-19 08:05:42.637510 | orchestrator | 08:05:42.637 STDOUT terraform:  + name = "testbed-node-4" 2025-02-19 08:05:42.637545 | orchestrator | 08:05:42.637 STDOUT terraform:  + power_state = "active" 2025-02-19 08:05:42.637560 | orchestrator | 08:05:42.637 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.637608 | orchestrator | 08:05:42.637 STDOUT terraform:  + security_groups = (known after apply) 2025-02-19 08:05:42.637623 | orchestrator | 08:05:42.637 STDOUT terraform:  + stop_before_destroy = false 2025-02-19 08:05:42.637657 | orchestrator | 08:05:42.637 STDOUT terraform:  + updated = (known after apply) 2025-02-19 08:05:42.637705 | orchestrator | 08:05:42.637 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-19 08:05:42.637761 | orchestrator | 08:05:42.637 STDOUT terraform:  + block_device { 2025-02-19 08:05:42.637787 | orchestrator | 08:05:42.637 STDOUT terraform:  + boot_index = 0 2025-02-19 08:05:42.637807 | orchestrator | 08:05:42.637 STDOUT terraform:  + delete_on_termination = false 2025-02-19 08:05:42.637827 | orchestrator | 08:05:42.637 STDOUT terraform:  + destination_type = "volume" 2025-02-19 08:05:42.637850 | orchestrator | 08:05:42.637 STDOUT terraform:  + multiattach = false 2025-02-19 08:05:42.637869 | orchestrator | 08:05:42.637 STDOUT terraform:  + source_type = "volume" 2025-02-19 08:05:42.637893 | orchestrator | 08:05:42.637 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.637944 | orchestrator | 08:05:42.637 STDOUT terraform:  } 2025-02-19 08:05:42.637967 | orchestrator | 08:05:42.637 STDOUT terraform:  + network { 2025-02-19 08:05:42.637988 | orchestrator | 08:05:42.637 STDOUT terraform:  + access_network = false 2025-02-19 08:05:42.638100 | orchestrator | 08:05:42.637 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-19 08:05:42.638129 | orchestrator | 08:05:42.637 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-19 08:05:42.638150 | orchestrator | 08:05:42.637 STDOUT terraform:  + mac = (known after apply) 2025-02-19 08:05:42.638188 | orchestrator | 08:05:42.637 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.638201 | orchestrator | 08:05:42.637 STDOUT terraform:  + port = (known after apply) 2025-02-19 08:05:42.638213 | orchestrator | 08:05:42.638 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 
08:05:42.638225 | orchestrator | 08:05:42.638 STDOUT terraform:  } 2025-02-19 08:05:42.638240 | orchestrator | 08:05:42.638 STDOUT terraform:  } 2025-02-19 08:05:42.638279 | orchestrator | 08:05:42.638 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-02-19 08:05:42.638292 | orchestrator | 08:05:42.638 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-02-19 08:05:42.638304 | orchestrator | 08:05:42.638 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-19 08:05:42.638316 | orchestrator | 08:05:42.638 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-19 08:05:42.638330 | orchestrator | 08:05:42.638 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-19 08:05:42.638460 | orchestrator | 08:05:42.638 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.638509 | orchestrator | 08:05:42.638 STDOUT terraform:  + availability_zone = "nova" 2025-02-19 08:05:42.638516 | orchestrator | 08:05:42.638 STDOUT terraform:  + config_drive = true 2025-02-19 08:05:42.638528 | orchestrator | 08:05:42.638 STDOUT terraform:  + created = (known after apply) 2025-02-19 08:05:42.638533 | orchestrator | 08:05:42.638 STDOUT terraform:  + flavor_id = (known after apply) 2025-02-19 08:05:42.638545 | orchestrator | 08:05:42.638 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-02-19 08:05:42.638550 | orchestrator | 08:05:42.638 STDOUT terraform:  + force_delete = false 2025-02-19 08:05:42.638556 | orchestrator | 08:05:42.638 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.638563 | orchestrator | 08:05:42.638 STDOUT terraform:  + image_id = (known after apply) 2025-02-19 08:05:42.638588 | orchestrator | 08:05:42.638 STDOUT terraform:  + image_name = (known after apply) 2025-02-19 08:05:42.638596 | orchestrator | 08:05:42.638 STDOUT terraform:  + key_pair = "testbed" 2025-02-19 08:05:42.638619 | orchestrator | 08:05:42.638 STDOUT terraform:  + name = "testbed-node-5" 2025-02-19 08:05:42.638644 | orchestrator | 08:05:42.638 STDOUT terraform:  + power_state = "active" 2025-02-19 08:05:42.638678 | orchestrator | 08:05:42.638 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.638712 | orchestrator | 08:05:42.638 STDOUT terraform:  + security_groups = (known after apply) 2025-02-19 08:05:42.638738 | orchestrator | 08:05:42.638 STDOUT terraform:  + stop_before_destroy = false 2025-02-19 08:05:42.638765 | orchestrator | 08:05:42.638 STDOUT terraform:  + updated = (known after apply) 2025-02-19 08:05:42.638812 | orchestrator | 08:05:42.638 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-02-19 08:05:42.638821 | orchestrator | 08:05:42.638 STDOUT terraform:  + block_device { 2025-02-19 08:05:42.638847 | orchestrator | 08:05:42.638 STDOUT terraform:  + boot_index = 0 2025-02-19 08:05:42.638874 | orchestrator | 08:05:42.638 STDOUT terraform:  + delete_on_termination = false 2025-02-19 08:05:42.638914 | orchestrator | 08:05:42.638 STDOUT terraform:  + destination_type = "volume" 2025-02-19 08:05:42.638943 | orchestrator | 08:05:42.638 STDOUT terraform:  + multiattach = false 2025-02-19 08:05:42.638987 | orchestrator | 08:05:42.638 STDOUT terraform:  + source_type = "volume" 2025-02-19 08:05:42.639022 | orchestrator | 08:05:42.638 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.639034 | orchestrator | 08:05:42.639 STDOUT terraform:  } 2025-02-19 08:05:42.639041 | orchestrator | 08:05:42.639 STDOUT terraform:  + 
network { 2025-02-19 08:05:42.639062 | orchestrator | 08:05:42.639 STDOUT terraform:  + access_network = false 2025-02-19 08:05:42.639092 | orchestrator | 08:05:42.639 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-02-19 08:05:42.639121 | orchestrator | 08:05:42.639 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-02-19 08:05:42.639152 | orchestrator | 08:05:42.639 STDOUT terraform:  + mac = (known after apply) 2025-02-19 08:05:42.639182 | orchestrator | 08:05:42.639 STDOUT terraform:  + name = (known after apply) 2025-02-19 08:05:42.639213 | orchestrator | 08:05:42.639 STDOUT terraform:  + port = (known after apply) 2025-02-19 08:05:42.639244 | orchestrator | 08:05:42.639 STDOUT terraform:  + uuid = (known after apply) 2025-02-19 08:05:42.639251 | orchestrator | 08:05:42.639 STDOUT terraform:  } 2025-02-19 08:05:42.639269 | orchestrator | 08:05:42.639 STDOUT terraform:  } 2025-02-19 08:05:42.639308 | orchestrator | 08:05:42.639 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-02-19 08:05:42.639336 | orchestrator | 08:05:42.639 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-02-19 08:05:42.639363 | orchestrator | 08:05:42.639 STDOUT terraform:  + fingerprint = (known after apply) 2025-02-19 08:05:42.639393 | orchestrator | 08:05:42.639 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.639413 | orchestrator | 08:05:42.639 STDOUT terraform:  + name = "testbed" 2025-02-19 08:05:42.639437 | orchestrator | 08:05:42.639 STDOUT terraform:  + private_key = (sensitive value) 2025-02-19 08:05:42.639466 | orchestrator | 08:05:42.639 STDOUT terraform:  + public_key = (known after apply) 2025-02-19 08:05:42.639495 | orchestrator | 08:05:42.639 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.639522 | orchestrator | 08:05:42.639 STDOUT terraform:  + user_id = (known after apply) 2025-02-19 08:05:42.639529 | orchestrator | 08:05:42.639 STDOUT terraform:  } 2025-02-19 08:05:42.639580 | orchestrator | 08:05:42.639 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-02-19 08:05:42.639628 | orchestrator | 08:05:42.639 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.639662 | orchestrator | 08:05:42.639 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.639683 | orchestrator | 08:05:42.639 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.639713 | orchestrator | 08:05:42.639 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.639741 | orchestrator | 08:05:42.639 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.639773 | orchestrator | 08:05:42.639 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.639784 | orchestrator | 08:05:42.639 STDOUT terraform:  } 2025-02-19 08:05:42.639831 | orchestrator | 08:05:42.639 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-02-19 08:05:42.639876 | orchestrator | 08:05:42.639 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.639905 | orchestrator | 08:05:42.639 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.639933 | orchestrator | 08:05:42.639 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.639961 | orchestrator | 08:05:42.639 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 
08:05:42.639989 | orchestrator | 08:05:42.639 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.640056 | orchestrator | 08:05:42.639 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.640068 | orchestrator | 08:05:42.640 STDOUT terraform:  } 2025-02-19 08:05:42.640123 | orchestrator | 08:05:42.640 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-02-19 08:05:42.640194 | orchestrator | 08:05:42.640 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.640228 | orchestrator | 08:05:42.640 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.640235 | orchestrator | 08:05:42.640 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.640253 | orchestrator | 08:05:42.640 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.640280 | orchestrator | 08:05:42.640 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.640308 | orchestrator | 08:05:42.640 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.640315 | orchestrator | 08:05:42.640 STDOUT terraform:  } 2025-02-19 08:05:42.640366 | orchestrator | 08:05:42.640 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-02-19 08:05:42.640413 | orchestrator | 08:05:42.640 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.640441 | orchestrator | 08:05:42.640 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.640469 | orchestrator | 08:05:42.640 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.640498 | orchestrator | 08:05:42.640 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.640525 | orchestrator | 08:05:42.640 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.640555 | orchestrator | 08:05:42.640 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.640562 | orchestrator | 08:05:42.640 STDOUT terraform:  } 2025-02-19 08:05:42.640614 | orchestrator | 08:05:42.640 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-02-19 08:05:42.640660 | orchestrator | 08:05:42.640 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.640687 | orchestrator | 08:05:42.640 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.640714 | orchestrator | 08:05:42.640 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.640742 | orchestrator | 08:05:42.640 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.640771 | orchestrator | 08:05:42.640 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.640796 | orchestrator | 08:05:42.640 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.640850 | orchestrator | 08:05:42.640 STDOUT terraform:  } 2025-02-19 08:05:42.640857 | orchestrator | 08:05:42.640 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-02-19 08:05:42.640899 | orchestrator | 08:05:42.640 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.640926 | orchestrator | 08:05:42.640 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.640954 | orchestrator | 08:05:42.640 STDOUT terraform:  + id = 
(known after apply) 2025-02-19 08:05:42.640982 | orchestrator | 08:05:42.640 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.641040 | orchestrator | 08:05:42.640 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.641048 | orchestrator | 08:05:42.641 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.641054 | orchestrator | 08:05:42.641 STDOUT terraform:  } 2025-02-19 08:05:42.641100 | orchestrator | 08:05:42.641 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-02-19 08:05:42.641148 | orchestrator | 08:05:42.641 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.641175 | orchestrator | 08:05:42.641 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.641203 | orchestrator | 08:05:42.641 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.641230 | orchestrator | 08:05:42.641 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.641258 | orchestrator | 08:05:42.641 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.641285 | orchestrator | 08:05:42.641 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.641292 | orchestrator | 08:05:42.641 STDOUT terraform:  } 2025-02-19 08:05:42.641360 | orchestrator | 08:05:42.641 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-02-19 08:05:42.641408 | orchestrator | 08:05:42.641 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.641436 | orchestrator | 08:05:42.641 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.641466 | orchestrator | 08:05:42.641 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.641495 | orchestrator | 08:05:42.641 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.641526 | orchestrator | 08:05:42.641 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.641552 | orchestrator | 08:05:42.641 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.641559 | orchestrator | 08:05:42.641 STDOUT terraform:  } 2025-02-19 08:05:42.641609 | orchestrator | 08:05:42.641 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-02-19 08:05:42.641657 | orchestrator | 08:05:42.641 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.641684 | orchestrator | 08:05:42.641 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.641712 | orchestrator | 08:05:42.641 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.641739 | orchestrator | 08:05:42.641 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.641767 | orchestrator | 08:05:42.641 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.641799 | orchestrator | 08:05:42.641 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.641856 | orchestrator | 08:05:42.641 STDOUT terraform:  } 2025-02-19 08:05:42.641864 | orchestrator | 08:05:42.641 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-02-19 08:05:42.641903 | orchestrator | 08:05:42.641 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.641931 | orchestrator | 
08:05:42.641 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647416 | orchestrator | 08:05:42.641 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647563 | orchestrator | 08:05:42.641 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647573 | orchestrator | 08:05:42.641 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647578 | orchestrator | 08:05:42.641 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647584 | orchestrator | 08:05:42.642 STDOUT terraform:  } 2025-02-19 08:05:42.647590 | orchestrator | 08:05:42.642 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created 2025-02-19 08:05:42.647596 | orchestrator | 08:05:42.642 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.647601 | orchestrator | 08:05:42.642 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647607 | orchestrator | 08:05:42.642 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647612 | orchestrator | 08:05:42.642 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647618 | orchestrator | 08:05:42.642 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647623 | orchestrator | 08:05:42.642 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647629 | orchestrator | 08:05:42.642 STDOUT terraform:  } 2025-02-19 08:05:42.647634 | orchestrator | 08:05:42.642 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created 2025-02-19 08:05:42.647651 | orchestrator | 08:05:42.643 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.647657 | orchestrator | 08:05:42.643 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647663 | orchestrator | 08:05:42.643 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647668 | orchestrator | 08:05:42.643 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647673 | orchestrator | 08:05:42.643 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647679 | orchestrator | 08:05:42.643 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647685 | orchestrator | 08:05:42.643 STDOUT terraform:  } 2025-02-19 08:05:42.647690 | orchestrator | 08:05:42.643 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created 2025-02-19 08:05:42.647696 | orchestrator | 08:05:42.643 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.647701 | orchestrator | 08:05:42.643 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647707 | orchestrator | 08:05:42.643 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647712 | orchestrator | 08:05:42.643 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647718 | orchestrator | 08:05:42.643 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647723 | orchestrator | 08:05:42.643 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647729 | orchestrator | 08:05:42.643 STDOUT terraform:  } 2025-02-19 08:05:42.647734 | orchestrator | 08:05:42.643 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created 2025-02-19 08:05:42.647740 | orchestrator | 
08:05:42.643 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.647745 | orchestrator | 08:05:42.643 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647758 | orchestrator | 08:05:42.643 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647764 | orchestrator | 08:05:42.643 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647769 | orchestrator | 08:05:42.643 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647775 | orchestrator | 08:05:42.643 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647781 | orchestrator | 08:05:42.643 STDOUT terraform:  } 2025-02-19 08:05:42.647792 | orchestrator | 08:05:42.643 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created 2025-02-19 08:05:42.647798 | orchestrator | 08:05:42.643 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.647803 | orchestrator | 08:05:42.643 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647809 | orchestrator | 08:05:42.643 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647814 | orchestrator | 08:05:42.643 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647820 | orchestrator | 08:05:42.643 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647829 | orchestrator | 08:05:42.643 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647837 | orchestrator | 08:05:42.643 STDOUT terraform:  } 2025-02-19 08:05:42.647843 | orchestrator | 08:05:42.643 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created 2025-02-19 08:05:42.647848 | orchestrator | 08:05:42.643 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.647853 | orchestrator | 08:05:42.643 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647859 | orchestrator | 08:05:42.643 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647864 | orchestrator | 08:05:42.643 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647870 | orchestrator | 08:05:42.644 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647875 | orchestrator | 08:05:42.644 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647880 | orchestrator | 08:05:42.644 STDOUT terraform:  } 2025-02-19 08:05:42.647886 | orchestrator | 08:05:42.644 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-02-19 08:05:42.647891 | orchestrator | 08:05:42.644 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.647897 | orchestrator | 08:05:42.644 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647902 | orchestrator | 08:05:42.644 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647908 | orchestrator | 08:05:42.644 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647913 | orchestrator | 08:05:42.644 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647918 | orchestrator | 08:05:42.644 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647924 | orchestrator | 08:05:42.644 STDOUT terraform:  } 2025-02-19 08:05:42.647929 | orchestrator | 08:05:42.644 
STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-02-19 08:05:42.647935 | orchestrator | 08:05:42.644 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-02-19 08:05:42.647940 | orchestrator | 08:05:42.644 STDOUT terraform:  + device = (known after apply) 2025-02-19 08:05:42.647945 | orchestrator | 08:05:42.644 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.647951 | orchestrator | 08:05:42.644 STDOUT terraform:  + instance_id = (known after apply) 2025-02-19 08:05:42.647956 | orchestrator | 08:05:42.644 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.647962 | orchestrator | 08:05:42.644 STDOUT terraform:  + volume_id = (known after apply) 2025-02-19 08:05:42.647967 | orchestrator | 08:05:42.644 STDOUT terraform:  } 2025-02-19 08:05:42.647980 | orchestrator | 08:05:42.644 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-02-19 08:05:42.647987 | orchestrator | 08:05:42.644 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-02-19 08:05:42.647992 | orchestrator | 08:05:42.644 STDOUT terraform:  + fixed_ip = (known after apply) 2025-02-19 08:05:42.648023 | orchestrator | 08:05:42.644 STDOUT terraform:  + floating_ip = (known after apply) 2025-02-19 08:05:42.648035 | orchestrator | 08:05:42.644 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.648041 | orchestrator | 08:05:42.644 STDOUT terraform:  + port_id = (known after apply) 2025-02-19 08:05:42.648049 | orchestrator | 08:05:42.644 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.648055 | orchestrator | 08:05:42.644 STDOUT terraform:  } 2025-02-19 08:05:42.648060 | orchestrator | 08:05:42.644 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-02-19 08:05:42.648066 | orchestrator | 08:05:42.644 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-02-19 08:05:42.648072 | orchestrator | 08:05:42.644 STDOUT terraform:  + address = (known after apply) 2025-02-19 08:05:42.648078 | orchestrator | 08:05:42.644 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.648083 | orchestrator | 08:05:42.644 STDOUT terraform:  + dns_domain = (known after apply) 2025-02-19 08:05:42.648088 | orchestrator | 08:05:42.644 STDOUT terraform:  + dns_name = (known after apply) 2025-02-19 08:05:42.648094 | orchestrator | 08:05:42.644 STDOUT terraform:  + fixed_ip = (known after apply) 2025-02-19 08:05:42.648099 | orchestrator | 08:05:42.644 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.648104 | orchestrator | 08:05:42.644 STDOUT terraform:  + pool = "public" 2025-02-19 08:05:42.648110 | orchestrator | 08:05:42.644 STDOUT terraform:  + port_id = (known after apply) 2025-02-19 08:05:42.648116 | orchestrator | 08:05:42.644 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.648121 | orchestrator | 08:05:42.644 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.648127 | orchestrator | 08:05:42.645 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.648132 | orchestrator | 08:05:42.645 STDOUT terraform:  } 2025-02-19 08:05:42.648137 | orchestrator | 08:05:42.645 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-02-19 08:05:42.648143 | 
orchestrator | 08:05:42.645 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-02-19 08:05:42.648148 | orchestrator | 08:05:42.645 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.648154 | orchestrator | 08:05:42.645 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.648159 | orchestrator | 08:05:42.645 STDOUT terraform:  + availability_zone_hints = [ 2025-02-19 08:05:42.648165 | orchestrator | 08:05:42.645 STDOUT terraform:  + "nova", 2025-02-19 08:05:42.648170 | orchestrator | 08:05:42.645 STDOUT terraform:  ] 2025-02-19 08:05:42.648175 | orchestrator | 08:05:42.645 STDOUT terraform:  + dns_domain = (known after apply) 2025-02-19 08:05:42.648181 | orchestrator | 08:05:42.645 STDOUT terraform:  + external = (known after apply) 2025-02-19 08:05:42.648186 | orchestrator | 08:05:42.645 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.648191 | orchestrator | 08:05:42.645 STDOUT terraform:  + mtu = (known after apply) 2025-02-19 08:05:42.648201 | orchestrator | 08:05:42.645 STDOUT terraform:  + name = "net-testbed-management" 2025-02-19 08:05:42.648206 | orchestrator | 08:05:42.645 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-19 08:05:42.648212 | orchestrator | 08:05:42.645 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-19 08:05:42.648217 | orchestrator | 08:05:42.645 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.648222 | orchestrator | 08:05:42.645 STDOUT terraform:  + shared = (known after apply) 2025-02-19 08:05:42.648228 | orchestrator | 08:05:42.645 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.648233 | orchestrator | 08:05:42.645 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-02-19 08:05:42.648239 | orchestrator | 08:05:42.645 STDOUT terraform:  + segments (known after apply) 2025-02-19 08:05:42.648244 | orchestrator | 08:05:42.645 STDOUT terraform:  } 2025-02-19 08:05:42.648253 | orchestrator | 08:05:42.645 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-02-19 08:05:42.648258 | orchestrator | 08:05:42.645 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-02-19 08:05:42.648264 | orchestrator | 08:05:42.645 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.648270 | orchestrator | 08:05:42.645 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-19 08:05:42.648276 | orchestrator | 08:05:42.645 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-19 08:05:42.648281 | orchestrator | 08:05:42.645 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.648286 | orchestrator | 08:05:42.645 STDOUT terraform:  + device_id = (known after apply) 2025-02-19 08:05:42.648292 | orchestrator | 08:05:42.645 STDOUT terraform:  + device_owner = (known after apply) 2025-02-19 08:05:42.648297 | orchestrator | 08:05:42.645 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-19 08:05:42.648302 | orchestrator | 08:05:42.645 STDOUT terraform:  + dns_name = (known after apply) 2025-02-19 08:05:42.648308 | orchestrator | 08:05:42.645 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.648313 | orchestrator | 08:05:42.645 STDOUT terraform:  + mac_address = (known after apply) 2025-02-19 08:05:42.648321 | orchestrator | 08:05:42.646 STDOUT terraform:  + network_id = (known after apply) 2025-02-19 
08:05:42.648326 | orchestrator | 08:05:42.646 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-19 08:05:42.648332 | orchestrator | 08:05:42.646 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-19 08:05:42.648337 | orchestrator | 08:05:42.646 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.648343 | orchestrator | 08:05:42.646 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-19 08:05:42.648348 | orchestrator | 08:05:42.646 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.648353 | orchestrator | 08:05:42.646 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648362 | orchestrator | 08:05:42.646 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-19 08:05:42.648367 | orchestrator | 08:05:42.646 STDOUT terraform:  } 2025-02-19 08:05:42.648373 | orchestrator | 08:05:42.646 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648378 | orchestrator | 08:05:42.646 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-19 08:05:42.648383 | orchestrator | 08:05:42.646 STDOUT terraform:  } 2025-02-19 08:05:42.648389 | orchestrator | 08:05:42.646 STDOUT terraform:  + binding (known after apply) 2025-02-19 08:05:42.648394 | orchestrator | 08:05:42.646 STDOUT terraform:  + fixed_ip { 2025-02-19 08:05:42.648399 | orchestrator | 08:05:42.646 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-02-19 08:05:42.648404 | orchestrator | 08:05:42.646 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.648410 | orchestrator | 08:05:42.646 STDOUT terraform:  } 2025-02-19 08:05:42.648415 | orchestrator | 08:05:42.646 STDOUT terraform:  } 2025-02-19 08:05:42.648421 | orchestrator | 08:05:42.646 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-02-19 08:05:42.648426 | orchestrator | 08:05:42.646 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-19 08:05:42.648432 | orchestrator | 08:05:42.646 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.648437 | orchestrator | 08:05:42.646 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-19 08:05:42.648443 | orchestrator | 08:05:42.646 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-19 08:05:42.648448 | orchestrator | 08:05:42.646 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.648453 | orchestrator | 08:05:42.646 STDOUT terraform:  + device_id = (known after apply) 2025-02-19 08:05:42.648462 | orchestrator | 08:05:42.646 STDOUT terraform:  + device_owner = (known after apply) 2025-02-19 08:05:42.648468 | orchestrator | 08:05:42.646 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-19 08:05:42.648473 | orchestrator | 08:05:42.646 STDOUT terraform:  + dns_name = (known after apply) 2025-02-19 08:05:42.648478 | orchestrator | 08:05:42.646 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.648484 | orchestrator | 08:05:42.646 STDOUT terraform:  + mac_address = (known after apply) 2025-02-19 08:05:42.648489 | orchestrator | 08:05:42.646 STDOUT terraform:  + network_id = (known after apply) 2025-02-19 08:05:42.648494 | orchestrator | 08:05:42.646 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-19 08:05:42.648500 | orchestrator | 08:05:42.646 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-19 08:05:42.648505 | orchestrator | 08:05:42.646 STDOUT terraform:  + region = 
(known after apply) 2025-02-19 08:05:42.648511 | orchestrator | 08:05:42.646 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-19 08:05:42.648516 | orchestrator | 08:05:42.646 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.648521 | orchestrator | 08:05:42.646 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648530 | orchestrator | 08:05:42.646 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-19 08:05:42.648536 | orchestrator | 08:05:42.647 STDOUT terraform:  } 2025-02-19 08:05:42.648541 | orchestrator | 08:05:42.647 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648546 | orchestrator | 08:05:42.647 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-19 08:05:42.648552 | orchestrator | 08:05:42.647 STDOUT terraform:  } 2025-02-19 08:05:42.648557 | orchestrator | 08:05:42.647 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648562 | orchestrator | 08:05:42.647 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-19 08:05:42.648568 | orchestrator | 08:05:42.647 STDOUT terraform:  } 2025-02-19 08:05:42.648573 | orchestrator | 08:05:42.647 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648578 | orchestrator | 08:05:42.647 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-19 08:05:42.648584 | orchestrator | 08:05:42.647 STDOUT terraform:  } 2025-02-19 08:05:42.648592 | orchestrator | 08:05:42.647 STDOUT terraform:  + binding (known after apply) 2025-02-19 08:05:42.648597 | orchestrator | 08:05:42.647 STDOUT terraform:  + fixed_ip { 2025-02-19 08:05:42.648603 | orchestrator | 08:05:42.647 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-02-19 08:05:42.648608 | orchestrator | 08:05:42.647 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.648613 | orchestrator | 08:05:42.647 STDOUT terraform:  } 2025-02-19 08:05:42.648619 | orchestrator | 08:05:42.647 STDOUT terraform:  } 2025-02-19 08:05:42.648624 | orchestrator | 08:05:42.647 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-02-19 08:05:42.648630 | orchestrator | 08:05:42.647 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-19 08:05:42.648635 | orchestrator | 08:05:42.647 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.648640 | orchestrator | 08:05:42.647 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-19 08:05:42.648646 | orchestrator | 08:05:42.647 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-19 08:05:42.648656 | orchestrator | 08:05:42.647 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.648662 | orchestrator | 08:05:42.647 STDOUT terraform:  + device_id = (known after apply) 2025-02-19 08:05:42.648667 | orchestrator | 08:05:42.647 STDOUT terraform:  + device_owner = (known after apply) 2025-02-19 08:05:42.648673 | orchestrator | 08:05:42.647 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-19 08:05:42.648678 | orchestrator | 08:05:42.647 STDOUT terraform:  + dns_name = (known after apply) 2025-02-19 08:05:42.648686 | orchestrator | 08:05:42.647 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.648692 | orchestrator | 08:05:42.647 STDOUT terraform:  + mac_address = (known after apply) 2025-02-19 08:05:42.648697 | orchestrator | 08:05:42.647 STDOUT terraform:  + network_id = (known after apply) 2025-02-19 08:05:42.648704 | orchestrator | 
08:05:42.647 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-19 08:05:42.648717 | orchestrator | 08:05:42.647 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-19 08:05:42.648726 | orchestrator | 08:05:42.647 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.648735 | orchestrator | 08:05:42.647 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-19 08:05:42.648743 | orchestrator | 08:05:42.647 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.648752 | orchestrator | 08:05:42.647 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648760 | orchestrator | 08:05:42.647 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-19 08:05:42.648768 | orchestrator | 08:05:42.647 STDOUT terraform:  } 2025-02-19 08:05:42.648777 | orchestrator | 08:05:42.647 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648785 | orchestrator | 08:05:42.647 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-19 08:05:42.648793 | orchestrator | 08:05:42.647 STDOUT terraform:  } 2025-02-19 08:05:42.648802 | orchestrator | 08:05:42.647 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648811 | orchestrator | 08:05:42.647 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-19 08:05:42.648820 | orchestrator | 08:05:42.648 STDOUT terraform:  } 2025-02-19 08:05:42.648830 | orchestrator | 08:05:42.648 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648836 | orchestrator | 08:05:42.648 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-19 08:05:42.648841 | orchestrator | 08:05:42.648 STDOUT terraform:  } 2025-02-19 08:05:42.648847 | orchestrator | 08:05:42.648 STDOUT terraform:  + binding (known after apply) 2025-02-19 08:05:42.648852 | orchestrator | 08:05:42.648 STDOUT terraform:  + fixed_ip { 2025-02-19 08:05:42.648858 | orchestrator | 08:05:42.648 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-02-19 08:05:42.648863 | orchestrator | 08:05:42.648 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.648869 | orchestrator | 08:05:42.648 STDOUT terraform:  } 2025-02-19 08:05:42.648874 | orchestrator | 08:05:42.648 STDOUT terraform:  } 2025-02-19 08:05:42.648880 | orchestrator | 08:05:42.648 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-02-19 08:05:42.648885 | orchestrator | 08:05:42.648 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-19 08:05:42.648891 | orchestrator | 08:05:42.648 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.648896 | orchestrator | 08:05:42.648 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-19 08:05:42.648902 | orchestrator | 08:05:42.648 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-19 08:05:42.648907 | orchestrator | 08:05:42.648 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.648913 | orchestrator | 08:05:42.648 STDOUT terraform:  + device_id = (known after apply) 2025-02-19 08:05:42.648918 | orchestrator | 08:05:42.648 STDOUT terraform:  + device_owner = (known after apply) 2025-02-19 08:05:42.648924 | orchestrator | 08:05:42.648 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-19 08:05:42.648933 | orchestrator | 08:05:42.648 STDOUT terraform:  + dns_name = (known after apply) 2025-02-19 08:05:42.648938 | orchestrator | 08:05:42.648 STDOUT terraform:  + id = (known after 
apply) 2025-02-19 08:05:42.648944 | orchestrator | 08:05:42.648 STDOUT terraform:  + mac_address = (known after apply) 2025-02-19 08:05:42.648953 | orchestrator | 08:05:42.648 STDOUT terraform:  + network_id = (known after apply) 2025-02-19 08:05:42.648959 | orchestrator | 08:05:42.648 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-19 08:05:42.648966 | orchestrator | 08:05:42.648 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-19 08:05:42.648974 | orchestrator | 08:05:42.648 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.648980 | orchestrator | 08:05:42.648 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-19 08:05:42.648986 | orchestrator | 08:05:42.648 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.648991 | orchestrator | 08:05:42.648 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.648997 | orchestrator | 08:05:42.648 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-19 08:05:42.649017 | orchestrator | 08:05:42.648 STDOUT terraform:  } 2025-02-19 08:05:42.649023 | orchestrator | 08:05:42.648 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.649029 | orchestrator | 08:05:42.648 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-19 08:05:42.649034 | orchestrator | 08:05:42.648 STDOUT terraform:  } 2025-02-19 08:05:42.649039 | orchestrator | 08:05:42.648 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.649045 | orchestrator | 08:05:42.648 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-19 08:05:42.649050 | orchestrator | 08:05:42.648 STDOUT terraform:  } 2025-02-19 08:05:42.649056 | orchestrator | 08:05:42.648 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.649061 | orchestrator | 08:05:42.648 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-19 08:05:42.649069 | orchestrator | 08:05:42.648 STDOUT terraform:  } 2025-02-19 08:05:42.649126 | orchestrator | 08:05:42.648 STDOUT terraform:  + binding (known after apply) 2025-02-19 08:05:42.649136 | orchestrator | 08:05:42.648 STDOUT terraform:  + fixed_ip { 2025-02-19 08:05:42.649143 | orchestrator | 08:05:42.648 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-02-19 08:05:42.649151 | orchestrator | 08:05:42.649 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.649159 | orchestrator | 08:05:42.649 STDOUT terraform:  } 2025-02-19 08:05:42.649167 | orchestrator | 08:05:42.649 STDOUT terraform:  } 2025-02-19 08:05:42.649181 | orchestrator | 08:05:42.649 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-02-19 08:05:42.649189 | orchestrator | 08:05:42.649 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-19 08:05:42.649199 | orchestrator | 08:05:42.649 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.649249 | orchestrator | 08:05:42.649 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-19 08:05:42.649269 | orchestrator | 08:05:42.649 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-19 08:05:42.649301 | orchestrator | 08:05:42.649 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.649329 | orchestrator | 08:05:42.649 STDOUT terraform:  + device_id = (known after apply) 2025-02-19 08:05:42.649363 | orchestrator | 08:05:42.649 STDOUT terraform:  + device_owner = (known after apply) 2025-02-19 08:05:42.649428 | orchestrator | 
08:05:42.649 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-19 08:05:42.649437 | orchestrator | 08:05:42.649 STDOUT terraform:  + dns_name = (known after apply) 2025-02-19 08:05:42.649474 | orchestrator | 08:05:42.649 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.649510 | orchestrator | 08:05:42.649 STDOUT terraform:  + mac_address = (known after apply) 2025-02-19 08:05:42.649544 | orchestrator | 08:05:42.649 STDOUT terraform:  + network_id = (known after apply) 2025-02-19 08:05:42.649578 | orchestrator | 08:05:42.649 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-19 08:05:42.649613 | orchestrator | 08:05:42.649 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-19 08:05:42.649649 | orchestrator | 08:05:42.649 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.649684 | orchestrator | 08:05:42.649 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-19 08:05:42.649719 | orchestrator | 08:05:42.649 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.649727 | orchestrator | 08:05:42.649 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.649763 | orchestrator | 08:05:42.649 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-19 08:05:42.649771 | orchestrator | 08:05:42.649 STDOUT terraform:  } 2025-02-19 08:05:42.649790 | orchestrator | 08:05:42.649 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.649819 | orchestrator | 08:05:42.649 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-19 08:05:42.649850 | orchestrator | 08:05:42.649 STDOUT terraform:  } 2025-02-19 08:05:42.649869 | orchestrator | 08:05:42.649 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.649878 | orchestrator | 08:05:42.649 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-19 08:05:42.649889 | orchestrator | 08:05:42.649 STDOUT terraform:  } 2025-02-19 08:05:42.649925 | orchestrator | 08:05:42.649 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.649939 | orchestrator | 08:05:42.649 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-19 08:05:42.649949 | orchestrator | 08:05:42.649 STDOUT terraform:  } 2025-02-19 08:05:42.649960 | orchestrator | 08:05:42.649 STDOUT terraform:  + binding (known after apply) 2025-02-19 08:05:42.649983 | orchestrator | 08:05:42.649 STDOUT terraform:  + fixed_ip { 2025-02-19 08:05:42.649994 | orchestrator | 08:05:42.649 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-02-19 08:05:42.650041 | orchestrator | 08:05:42.649 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.650052 | orchestrator | 08:05:42.649 STDOUT terraform:  } 2025-02-19 08:05:42.650071 | orchestrator | 08:05:42.650 STDOUT terraform:  } 2025-02-19 08:05:42.650099 | orchestrator | 08:05:42.650 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-02-19 08:05:42.650155 | orchestrator | 08:05:42.650 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-19 08:05:42.650179 | orchestrator | 08:05:42.650 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.650213 | orchestrator | 08:05:42.650 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-19 08:05:42.650249 | orchestrator | 08:05:42.650 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-19 08:05:42.650375 | orchestrator | 08:05:42.650 STDOUT terraform:  + all_tags = (known after 
apply) 2025-02-19 08:05:42.650412 | orchestrator | 08:05:42.650 STDOUT terraform:  + device_id = (known after apply) 2025-02-19 08:05:42.650448 | orchestrator | 08:05:42.650 STDOUT terraform:  + device_owner = (known after apply) 2025-02-19 08:05:42.650483 | orchestrator | 08:05:42.650 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-19 08:05:42.650549 | orchestrator | 08:05:42.650 STDOUT terraform:  + dns_name = (known after apply) 2025-02-19 08:05:42.650558 | orchestrator | 08:05:42.650 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.650586 | orchestrator | 08:05:42.650 STDOUT terraform:  + mac_address = (known after apply) 2025-02-19 08:05:42.650621 | orchestrator | 08:05:42.650 STDOUT terraform:  + network_id = (known after apply) 2025-02-19 08:05:42.650657 | orchestrator | 08:05:42.650 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-19 08:05:42.650691 | orchestrator | 08:05:42.650 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-19 08:05:42.650726 | orchestrator | 08:05:42.650 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.650762 | orchestrator | 08:05:42.650 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-19 08:05:42.650797 | orchestrator | 08:05:42.650 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.650819 | orchestrator | 08:05:42.650 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.650847 | orchestrator | 08:05:42.650 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-19 08:05:42.650857 | orchestrator | 08:05:42.650 STDOUT terraform:  } 2025-02-19 08:05:42.650877 | orchestrator | 08:05:42.650 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.650907 | orchestrator | 08:05:42.650 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-19 08:05:42.650913 | orchestrator | 08:05:42.650 STDOUT terraform:  } 2025-02-19 08:05:42.650936 | orchestrator | 08:05:42.650 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.650967 | orchestrator | 08:05:42.650 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-19 08:05:42.650974 | orchestrator | 08:05:42.650 STDOUT terraform:  } 2025-02-19 08:05:42.650993 | orchestrator | 08:05:42.650 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.651050 | orchestrator | 08:05:42.650 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-19 08:05:42.651065 | orchestrator | 08:05:42.651 STDOUT terraform:  } 2025-02-19 08:05:42.651071 | orchestrator | 08:05:42.651 STDOUT terraform:  + binding (known after apply) 2025-02-19 08:05:42.651089 | orchestrator | 08:05:42.651 STDOUT terraform:  + fixed_ip { 2025-02-19 08:05:42.651114 | orchestrator | 08:05:42.651 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-02-19 08:05:42.651147 | orchestrator | 08:05:42.651 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.651157 | orchestrator | 08:05:42.651 STDOUT terraform:  } 2025-02-19 08:05:42.651167 | orchestrator | 08:05:42.651 STDOUT terraform:  } 2025-02-19 08:05:42.651216 | orchestrator | 08:05:42.651 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-02-19 08:05:42.651257 | orchestrator | 08:05:42.651 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-19 08:05:42.651295 | orchestrator | 08:05:42.651 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.651331 | orchestrator | 08:05:42.651 STDOUT 
terraform:  + all_fixed_ips = (known after apply) 2025-02-19 08:05:42.651365 | orchestrator | 08:05:42.651 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-19 08:05:42.651400 | orchestrator | 08:05:42.651 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.651434 | orchestrator | 08:05:42.651 STDOUT terraform:  + device_id = (known after apply) 2025-02-19 08:05:42.651470 | orchestrator | 08:05:42.651 STDOUT terraform:  + device_owner = (known after apply) 2025-02-19 08:05:42.651504 | orchestrator | 08:05:42.651 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-19 08:05:42.651541 | orchestrator | 08:05:42.651 STDOUT terraform:  + dns_name = (known after apply) 2025-02-19 08:05:42.651579 | orchestrator | 08:05:42.651 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.651614 | orchestrator | 08:05:42.651 STDOUT terraform:  + mac_address = (known after apply) 2025-02-19 08:05:42.651651 | orchestrator | 08:05:42.651 STDOUT terraform:  + network_id = (known after apply) 2025-02-19 08:05:42.651685 | orchestrator | 08:05:42.651 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-19 08:05:42.651720 | orchestrator | 08:05:42.651 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-19 08:05:42.651756 | orchestrator | 08:05:42.651 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.651791 | orchestrator | 08:05:42.651 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-19 08:05:42.651826 | orchestrator | 08:05:42.651 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.651838 | orchestrator | 08:05:42.651 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.651871 | orchestrator | 08:05:42.651 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-19 08:05:42.651882 | orchestrator | 08:05:42.651 STDOUT terraform:  } 2025-02-19 08:05:42.651893 | orchestrator | 08:05:42.651 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.651926 | orchestrator | 08:05:42.651 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-19 08:05:42.651945 | orchestrator | 08:05:42.651 STDOUT terraform:  } 2025-02-19 08:05:42.651975 | orchestrator | 08:05:42.651 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.651987 | orchestrator | 08:05:42.651 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-19 08:05:42.651997 | orchestrator | 08:05:42.651 STDOUT terraform:  } 2025-02-19 08:05:42.652017 | orchestrator | 08:05:42.651 STDOUT terraform:  + allowed_address_pairs { 2025-02-19 08:05:42.652042 | orchestrator | 08:05:42.651 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-19 08:05:42.652053 | orchestrator | 08:05:42.652 STDOUT terraform:  } 2025-02-19 08:05:42.652064 | orchestrator | 08:05:42.652 STDOUT terraform:  + binding (known after apply) 2025-02-19 08:05:42.652075 | orchestrator | 08:05:42.652 STDOUT terraform:  + fixed_ip { 2025-02-19 08:05:42.652115 | orchestrator | 08:05:42.652 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-02-19 08:05:42.652158 | orchestrator | 08:05:42.652 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.652170 | orchestrator | 08:05:42.652 STDOUT terraform:  } 2025-02-19 08:05:42.652181 | orchestrator | 08:05:42.652 STDOUT terraform:  } 2025-02-19 08:05:42.652251 | orchestrator | 08:05:42.652 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-02-19 08:05:42.652322 | orchestrator | 
08:05:42.652 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-02-19 08:05:42.652334 | orchestrator | 08:05:42.652 STDOUT terraform:  + force_destroy = false 2025-02-19 08:05:42.652363 | orchestrator | 08:05:42.652 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.652392 | orchestrator | 08:05:42.652 STDOUT terraform:  + port_id = (known after apply) 2025-02-19 08:05:42.652421 | orchestrator | 08:05:42.652 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.652449 | orchestrator | 08:05:42.652 STDOUT terraform:  + router_id = (known after apply) 2025-02-19 08:05:42.652479 | orchestrator | 08:05:42.652 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-19 08:05:42.652490 | orchestrator | 08:05:42.652 STDOUT terraform:  } 2025-02-19 08:05:42.652544 | orchestrator | 08:05:42.652 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-02-19 08:05:42.652590 | orchestrator | 08:05:42.652 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-02-19 08:05:42.652626 | orchestrator | 08:05:42.652 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-19 08:05:42.652663 | orchestrator | 08:05:42.652 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.652687 | orchestrator | 08:05:42.652 STDOUT terraform:  + availability_zone_hints = [ 2025-02-19 08:05:42.652706 | orchestrator | 08:05:42.652 STDOUT terraform:  + "nova", 2025-02-19 08:05:42.652718 | orchestrator | 08:05:42.652 STDOUT terraform:  ] 2025-02-19 08:05:42.652754 | orchestrator | 08:05:42.652 STDOUT terraform:  + distributed = (known after apply) 2025-02-19 08:05:42.652788 | orchestrator | 08:05:42.652 STDOUT terraform:  + enable_snat = (known after apply) 2025-02-19 08:05:42.652837 | orchestrator | 08:05:42.652 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-02-19 08:05:42.652873 | orchestrator | 08:05:42.652 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.652903 | orchestrator | 08:05:42.652 STDOUT terraform:  + name = "testbed" 2025-02-19 08:05:42.652942 | orchestrator | 08:05:42.652 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.652976 | orchestrator | 08:05:42.652 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.653036 | orchestrator | 08:05:42.652 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-02-19 08:05:42.653079 | orchestrator | 08:05:42.652 STDOUT terraform:  } 2025-02-19 08:05:42.653086 | orchestrator | 08:05:42.653 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-02-19 08:05:42.653143 | orchestrator | 08:05:42.653 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-02-19 08:05:42.653165 | orchestrator | 08:05:42.653 STDOUT terraform:  + description = "ssh" 2025-02-19 08:05:42.653223 | orchestrator | 08:05:42.653 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.653230 | orchestrator | 08:05:42.653 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.653236 | orchestrator | 08:05:42.653 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.653262 | orchestrator | 08:05:42.653 STDOUT terraform:  + port_range_max = 22 2025-02-19 08:05:42.653291 | orchestrator | 08:05:42.653 STDOUT terraform:  + port_range_min = 22 2025-02-19 08:05:42.653303 | orchestrator | 08:05:42.653 STDOUT terraform:  + 
protocol = "tcp" 2025-02-19 08:05:42.653340 | orchestrator | 08:05:42.653 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.653367 | orchestrator | 08:05:42.653 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.653388 | orchestrator | 08:05:42.653 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-19 08:05:42.653419 | orchestrator | 08:05:42.653 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.653451 | orchestrator | 08:05:42.653 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.653458 | orchestrator | 08:05:42.653 STDOUT terraform:  } 2025-02-19 08:05:42.653511 | orchestrator | 08:05:42.653 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-02-19 08:05:42.653574 | orchestrator | 08:05:42.653 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-02-19 08:05:42.653610 | orchestrator | 08:05:42.653 STDOUT terraform:  + description = "wireguard" 2025-02-19 08:05:42.653643 | orchestrator | 08:05:42.653 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.653666 | orchestrator | 08:05:42.653 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.653697 | orchestrator | 08:05:42.653 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.653719 | orchestrator | 08:05:42.653 STDOUT terraform:  + port_range_max = 51820 2025-02-19 08:05:42.653732 | orchestrator | 08:05:42.653 STDOUT terraform:  + port_range_min = 51820 2025-02-19 08:05:42.653755 | orchestrator | 08:05:42.653 STDOUT terraform:  + protocol = "udp" 2025-02-19 08:05:42.653787 | orchestrator | 08:05:42.653 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.653816 | orchestrator | 08:05:42.653 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.653849 | orchestrator | 08:05:42.653 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-19 08:05:42.653883 | orchestrator | 08:05:42.653 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.653913 | orchestrator | 08:05:42.653 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.653929 | orchestrator | 08:05:42.653 STDOUT terraform:  } 2025-02-19 08:05:42.653982 | orchestrator | 08:05:42.653 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-02-19 08:05:42.654073 | orchestrator | 08:05:42.653 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-02-19 08:05:42.654088 | orchestrator | 08:05:42.654 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.654112 | orchestrator | 08:05:42.654 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.654142 | orchestrator | 08:05:42.654 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.654165 | orchestrator | 08:05:42.654 STDOUT terraform:  + protocol = "tcp" 2025-02-19 08:05:42.654195 | orchestrator | 08:05:42.654 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.654225 | orchestrator | 08:05:42.654 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.654253 | orchestrator | 08:05:42.654 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-02-19 08:05:42.654283 | orchestrator | 08:05:42.654 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.654314 | orchestrator | 
08:05:42.654 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.654321 | orchestrator | 08:05:42.654 STDOUT terraform:  } 2025-02-19 08:05:42.654379 | orchestrator | 08:05:42.654 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-02-19 08:05:42.654432 | orchestrator | 08:05:42.654 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-02-19 08:05:42.654452 | orchestrator | 08:05:42.654 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.654476 | orchestrator | 08:05:42.654 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.654507 | orchestrator | 08:05:42.654 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.654519 | orchestrator | 08:05:42.654 STDOUT terraform:  + protocol = "udp" 2025-02-19 08:05:42.654551 | orchestrator | 08:05:42.654 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.654582 | orchestrator | 08:05:42.654 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.654616 | orchestrator | 08:05:42.654 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-02-19 08:05:42.654642 | orchestrator | 08:05:42.654 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.654673 | orchestrator | 08:05:42.654 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.654729 | orchestrator | 08:05:42.654 STDOUT terraform:  } 2025-02-19 08:05:42.654735 | orchestrator | 08:05:42.654 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-02-19 08:05:42.654782 | orchestrator | 08:05:42.654 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-02-19 08:05:42.654805 | orchestrator | 08:05:42.654 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.654825 | orchestrator | 08:05:42.654 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.654856 | orchestrator | 08:05:42.654 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.654876 | orchestrator | 08:05:42.654 STDOUT terraform:  + protocol = "icmp" 2025-02-19 08:05:42.654905 | orchestrator | 08:05:42.654 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.654934 | orchestrator | 08:05:42.654 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.654961 | orchestrator | 08:05:42.654 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-19 08:05:42.654991 | orchestrator | 08:05:42.654 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.655047 | orchestrator | 08:05:42.654 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.655061 | orchestrator | 08:05:42.655 STDOUT terraform:  } 2025-02-19 08:05:42.655109 | orchestrator | 08:05:42.655 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-02-19 08:05:42.655155 | orchestrator | 08:05:42.655 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-02-19 08:05:42.655181 | orchestrator | 08:05:42.655 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.655202 | orchestrator | 08:05:42.655 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.655234 | orchestrator | 08:05:42.655 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.655254 | orchestrator | 08:05:42.655 STDOUT 
terraform:  + protocol = "tcp" 2025-02-19 08:05:42.655302 | orchestrator | 08:05:42.655 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.655349 | orchestrator | 08:05:42.655 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.655374 | orchestrator | 08:05:42.655 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-19 08:05:42.655403 | orchestrator | 08:05:42.655 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.655435 | orchestrator | 08:05:42.655 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.655452 | orchestrator | 08:05:42.655 STDOUT terraform:  } 2025-02-19 08:05:42.655502 | orchestrator | 08:05:42.655 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-02-19 08:05:42.655552 | orchestrator | 08:05:42.655 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-02-19 08:05:42.655575 | orchestrator | 08:05:42.655 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.655596 | orchestrator | 08:05:42.655 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.655628 | orchestrator | 08:05:42.655 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.655647 | orchestrator | 08:05:42.655 STDOUT terraform:  + protocol = "udp" 2025-02-19 08:05:42.655680 | orchestrator | 08:05:42.655 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.655711 | orchestrator | 08:05:42.655 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.655735 | orchestrator | 08:05:42.655 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-19 08:05:42.655766 | orchestrator | 08:05:42.655 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.655796 | orchestrator | 08:05:42.655 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.655808 | orchestrator | 08:05:42.655 STDOUT terraform:  } 2025-02-19 08:05:42.655852 | orchestrator | 08:05:42.655 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-02-19 08:05:42.655904 | orchestrator | 08:05:42.655 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-02-19 08:05:42.655928 | orchestrator | 08:05:42.655 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.655940 | orchestrator | 08:05:42.655 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.655971 | orchestrator | 08:05:42.655 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.655990 | orchestrator | 08:05:42.655 STDOUT terraform:  + protocol = "icmp" 2025-02-19 08:05:42.656031 | orchestrator | 08:05:42.655 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.656060 | orchestrator | 08:05:42.656 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.656083 | orchestrator | 08:05:42.656 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-19 08:05:42.656113 | orchestrator | 08:05:42.656 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.656145 | orchestrator | 08:05:42.656 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.656153 | orchestrator | 08:05:42.656 STDOUT terraform:  } 2025-02-19 08:05:42.656204 | orchestrator | 08:05:42.656 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-02-19 08:05:42.656252 | 
orchestrator | 08:05:42.656 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-02-19 08:05:42.656272 | orchestrator | 08:05:42.656 STDOUT terraform:  + description = "vrrp" 2025-02-19 08:05:42.656297 | orchestrator | 08:05:42.656 STDOUT terraform:  + direction = "ingress" 2025-02-19 08:05:42.656317 | orchestrator | 08:05:42.656 STDOUT terraform:  + ethertype = "IPv4" 2025-02-19 08:05:42.656347 | orchestrator | 08:05:42.656 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.656368 | orchestrator | 08:05:42.656 STDOUT terraform:  + protocol = "112" 2025-02-19 08:05:42.656398 | orchestrator | 08:05:42.656 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.656430 | orchestrator | 08:05:42.656 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-19 08:05:42.656450 | orchestrator | 08:05:42.656 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-19 08:05:42.656480 | orchestrator | 08:05:42.656 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-19 08:05:42.656509 | orchestrator | 08:05:42.656 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.656516 | orchestrator | 08:05:42.656 STDOUT terraform:  } 2025-02-19 08:05:42.656566 | orchestrator | 08:05:42.656 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-02-19 08:05:42.656614 | orchestrator | 08:05:42.656 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-02-19 08:05:42.656641 | orchestrator | 08:05:42.656 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.656679 | orchestrator | 08:05:42.656 STDOUT terraform:  + description = "management security group" 2025-02-19 08:05:42.656707 | orchestrator | 08:05:42.656 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.656736 | orchestrator | 08:05:42.656 STDOUT terraform:  + name = "testbed-management" 2025-02-19 08:05:42.656760 | orchestrator | 08:05:42.656 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.656786 | orchestrator | 08:05:42.656 STDOUT terraform:  + stateful = (known after apply) 2025-02-19 08:05:42.656814 | orchestrator | 08:05:42.656 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.656821 | orchestrator | 08:05:42.656 STDOUT terraform:  } 2025-02-19 08:05:42.656868 | orchestrator | 08:05:42.656 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-02-19 08:05:42.656911 | orchestrator | 08:05:42.656 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-02-19 08:05:42.656941 | orchestrator | 08:05:42.656 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.656969 | orchestrator | 08:05:42.656 STDOUT terraform:  + description = "node security group" 2025-02-19 08:05:42.656998 | orchestrator | 08:05:42.656 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.657040 | orchestrator | 08:05:42.656 STDOUT terraform:  + name = "testbed-node" 2025-02-19 08:05:42.657047 | orchestrator | 08:05:42.657 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.657078 | orchestrator | 08:05:42.657 STDOUT terraform:  + stateful = (known after apply) 2025-02-19 08:05:42.657108 | orchestrator | 08:05:42.657 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.657116 | orchestrator | 08:05:42.657 STDOUT terraform:  } 2025-02-19 
08:05:42.657160 | orchestrator | 08:05:42.657 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-02-19 08:05:42.657203 | orchestrator | 08:05:42.657 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-02-19 08:05:42.657232 | orchestrator | 08:05:42.657 STDOUT terraform:  + all_tags = (known after apply) 2025-02-19 08:05:42.657261 | orchestrator | 08:05:42.657 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-02-19 08:05:42.657281 | orchestrator | 08:05:42.657 STDOUT terraform:  + dns_nameservers = [ 2025-02-19 08:05:42.657293 | orchestrator | 08:05:42.657 STDOUT terraform:  + "8.8.8.8", 2025-02-19 08:05:42.657310 | orchestrator | 08:05:42.657 STDOUT terraform:  + "9.9.9.9", 2025-02-19 08:05:42.657317 | orchestrator | 08:05:42.657 STDOUT terraform:  ] 2025-02-19 08:05:42.657340 | orchestrator | 08:05:42.657 STDOUT terraform:  + enable_dhcp = true 2025-02-19 08:05:42.657369 | orchestrator | 08:05:42.657 STDOUT terraform:  + gateway_ip = (known after apply) 2025-02-19 08:05:42.657412 | orchestrator | 08:05:42.657 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.657432 | orchestrator | 08:05:42.657 STDOUT terraform:  + ip_version = 4 2025-02-19 08:05:42.657463 | orchestrator | 08:05:42.657 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-02-19 08:05:42.657493 | orchestrator | 08:05:42.657 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-02-19 08:05:42.657530 | orchestrator | 08:05:42.657 STDOUT terraform:  + name = "subnet-testbed-management" 2025-02-19 08:05:42.657562 | orchestrator | 08:05:42.657 STDOUT terraform:  + network_id = (known after apply) 2025-02-19 08:05:42.657582 | orchestrator | 08:05:42.657 STDOUT terraform:  + no_gateway = false 2025-02-19 08:05:42.657613 | orchestrator | 08:05:42.657 STDOUT terraform:  + region = (known after apply) 2025-02-19 08:05:42.657642 | orchestrator | 08:05:42.657 STDOUT terraform:  + service_types = (known after apply) 2025-02-19 08:05:42.657672 | orchestrator | 08:05:42.657 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-19 08:05:42.657692 | orchestrator | 08:05:42.657 STDOUT terraform:  + allocation_pool { 2025-02-19 08:05:42.657716 | orchestrator | 08:05:42.657 STDOUT terraform:  + end = "192.168.31.250" 2025-02-19 08:05:42.657739 | orchestrator | 08:05:42.657 STDOUT terraform:  + start = "192.168.31.200" 2025-02-19 08:05:42.657746 | orchestrator | 08:05:42.657 STDOUT terraform:  } 2025-02-19 08:05:42.657767 | orchestrator | 08:05:42.657 STDOUT terraform:  } 2025-02-19 08:05:42.657792 | orchestrator | 08:05:42.657 STDOUT terraform:  # terraform_data.image will be created 2025-02-19 08:05:42.657815 | orchestrator | 08:05:42.657 STDOUT terraform:  + resource "terraform_data" "image" { 2025-02-19 08:05:42.657839 | orchestrator | 08:05:42.657 STDOUT terraform:  + id = (known after apply) 2025-02-19 08:05:42.657850 | orchestrator | 08:05:42.657 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-02-19 08:05:42.657879 | orchestrator | 08:05:42.657 STDOUT terraform:  + output = (known after apply) 2025-02-19 08:05:42.657891 | orchestrator | 08:05:42.657 STDOUT terraform:  } 2025-02-19 08:05:42.657917 | orchestrator | 08:05:42.657 STDOUT terraform:  # terraform_data.image_node will be created 2025-02-19 08:05:42.657944 | orchestrator | 08:05:42.657 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-02-19 08:05:42.657965 | orchestrator | 08:05:42.657 STDOUT terraform:  + id = (known after apply) 2025-02-19 
08:05:42.657984 | orchestrator | 08:05:42.657 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-02-19 08:05:42.658039 | orchestrator | 08:05:42.657 STDOUT terraform:  + output = (known after apply) 2025-02-19 08:05:42.658069 | orchestrator | 08:05:42.658 STDOUT terraform:  } 2025-02-19 08:05:42.658082 | orchestrator | 08:05:42.658 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-02-19 08:05:42.658102 | orchestrator | 08:05:42.658 STDOUT terraform: Changes to Outputs: 2025-02-19 08:05:42.658109 | orchestrator | 08:05:42.658 STDOUT terraform:  + manager_address = (sensitive value) 2025-02-19 08:05:42.658128 | orchestrator | 08:05:42.658 STDOUT terraform:  + private_key = (sensitive value) 2025-02-19 08:05:42.876317 | orchestrator | 08:05:42.875 STDOUT terraform: terraform_data.image: Creating... 2025-02-19 08:05:42.876403 | orchestrator | 08:05:42.875 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=f54da907-c7b8-ba22-f821-758957062cba] 2025-02-19 08:05:42.876420 | orchestrator | 08:05:42.876 STDOUT terraform: terraform_data.image_node: Creating... 2025-02-19 08:05:42.879065 | orchestrator | 08:05:42.878 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=92f84b38-782e-0035-96bc-88ecc40d9ed2] 2025-02-19 08:05:42.892803 | orchestrator | 08:05:42.891 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-02-19 08:05:42.897132 | orchestrator | 08:05:42.896 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-02-19 08:05:42.904358 | orchestrator | 08:05:42.904 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-02-19 08:05:42.906565 | orchestrator | 08:05:42.905 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-02-19 08:05:42.907185 | orchestrator | 08:05:42.907 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-02-19 08:05:42.908109 | orchestrator | 08:05:42.907 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-02-19 08:05:42.908144 | orchestrator | 08:05:42.907 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-02-19 08:05:42.908609 | orchestrator | 08:05:42.908 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-02-19 08:05:42.909099 | orchestrator | 08:05:42.909 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-02-19 08:05:42.913302 | orchestrator | 08:05:42.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-02-19 08:05:43.352235 | orchestrator | 08:05:43.351 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-02-19 08:05:43.357424 | orchestrator | 08:05:43.356 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-02-19 08:05:43.362215 | orchestrator | 08:05:43.362 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-02-19 08:05:43.365569 | orchestrator | 08:05:43.365 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-02-19 08:05:43.500728 | orchestrator | 08:05:43.500 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-02-19 08:05:43.507868 | orchestrator | 08:05:43.507 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
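The plan above creates six nearly identical openstack_networking_port_v2.node_port_management ports that differ only in their fixed IP (192.168.16.10 through 192.168.16.15) and share the same four allowed address pairs. The testbed's Terraform sources are not part of this log, but a pattern like this is usually expressed with count; a minimal sketch follows, in which the count value, the interpolated addresses and the cross-resource references are assumptions reconstructed from the plan output:

    # Hypothetical reconstruction of the node management ports -- only the
    # attribute values are taken from the plan above; the structure is assumed.
    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6                                               # node_port_management[0..5]
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.${10 + count.index}"              # 192.168.16.10 .. 192.168.16.15
      }

      # Every node port allows the same additional addresses besides its fixed IP.
      dynamic "allowed_address_pairs" {
        for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
        content {
          ip_address = allowed_address_pairs.value
        }
      }
    }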
2025-02-19 08:05:48.882407 | orchestrator | 08:05:48.881 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=891a43be-4483-40c3-844a-da0133bbf15a] 2025-02-19 08:05:50.040934 | orchestrator | 08:05:48.890 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-02-19 08:05:52.905341 | orchestrator | 08:05:52.904 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-02-19 08:05:52.906344 | orchestrator | 08:05:52.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-02-19 08:05:52.908497 | orchestrator | 08:05:52.908 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-02-19 08:05:52.909701 | orchestrator | 08:05:52.909 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-02-19 08:05:52.909790 | orchestrator | 08:05:52.909 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-02-19 08:05:52.914204 | orchestrator | 08:05:52.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-02-19 08:05:53.363384 | orchestrator | 08:05:53.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-02-19 08:05:53.365465 | orchestrator | 08:05:53.365 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-02-19 08:05:53.508899 | orchestrator | 08:05:53.508 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-02-19 08:05:53.544983 | orchestrator | 08:05:53.544 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=923f2b44-0879-4277-a106-844be4b2565d] 2025-02-19 08:05:53.550714 | orchestrator | 08:05:53.550 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-02-19 08:05:53.562155 | orchestrator | 08:05:53.561 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=5c11fa33-d2ef-45ea-bc93-56551b069e33] 2025-02-19 08:05:53.567771 | orchestrator | 08:05:53.567 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-02-19 08:05:53.586782 | orchestrator | 08:05:53.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=21743850-c155-402b-9a95-271bd8472759] 2025-02-19 08:05:53.592581 | orchestrator | 08:05:53.592 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-02-19 08:05:53.595948 | orchestrator | 08:05:53.595 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=91d4d525-aaae-41a7-908a-2e5d882c10b9] 2025-02-19 08:05:53.603816 | orchestrator | 08:05:53.603 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-02-19 08:05:53.633250 | orchestrator | 08:05:53.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=116ec19e-6576-4adf-ada1-59164a5d1c9f] 2025-02-19 08:05:53.641467 | orchestrator | 08:05:53.640 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
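The apply phase starts by creating the Cinder volumes (node_volume[0..17], node_base_volume[0..5] and manager_base_volume[0]) in parallel; each finishes after roughly ten seconds. A minimal sketch of such counted volume resources, assuming hypothetical names and sizes (the log only shows the resource types and index counts, not the real sizes):

    # Sketch only: sizes and the naming scheme are assumptions, not taken from this log.
    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count = 18                        # node_volume[0] .. node_volume[17] above
      name  = "testbed-node-volume-${count.index}"
      size  = 20
    }

    resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      count    = 6                      # one base volume per node
      name     = "testbed-node-base-${count.index}"
      size     = 50
      image_id = data.openstack_images_image_v2.image_node.id   # "Ubuntu 24.04", read earlier in the log
    }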
2025-02-19 08:05:53.652450 | orchestrator | 08:05:53.640 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=7ac42676-4a1f-422d-9e47-87a492d5a795] 2025-02-19 08:05:53.652531 | orchestrator | 08:05:53.652 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-02-19 08:05:53.665427 | orchestrator | 08:05:53.665 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=6cdb92e8-c898-48ca-adcb-2a30d1567e49] 2025-02-19 08:05:53.672813 | orchestrator | 08:05:53.672 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-02-19 08:05:53.677944 | orchestrator | 08:05:53.677 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 11s [id=933f95c9-b090-4d95-b9b7-90a087e62286] 2025-02-19 08:05:53.685529 | orchestrator | 08:05:53.685 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-02-19 08:05:53.732716 | orchestrator | 08:05:53.732 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=420ab18e-fdcb-4974-b92c-678938c23e9b] 2025-02-19 08:05:53.741599 | orchestrator | 08:05:53.741 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-02-19 08:05:58.891743 | orchestrator | 08:05:58.891 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-02-19 08:05:59.072633 | orchestrator | 08:05:59.072 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=0f115ae7-332f-47b5-bfba-4efd1297123a] 2025-02-19 08:05:59.081291 | orchestrator | 08:05:59.081 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-02-19 08:06:03.552409 | orchestrator | 08:06:03.552 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-02-19 08:06:03.568874 | orchestrator | 08:06:03.568 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-02-19 08:06:03.594390 | orchestrator | 08:06:03.593 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-02-19 08:06:03.605602 | orchestrator | 08:06:03.605 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-02-19 08:06:03.641093 | orchestrator | 08:06:03.640 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-02-19 08:06:03.653249 | orchestrator | 08:06:03.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-02-19 08:06:03.673536 | orchestrator | 08:06:03.673 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-02-19 08:06:03.686761 | orchestrator | 08:06:03.686 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-02-19 08:06:03.743110 | orchestrator | 08:06:03.742 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-02-19 08:06:03.747022 | orchestrator | 08:06:03.746 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=eb5d754e-727a-4983-9d71-2a65afff7a52] 2025-02-19 08:06:03.761518 | orchestrator | 08:06:03.761 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-02-19 08:06:03.772683 | orchestrator | 08:06:03.772 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=d6c08883-a737-4166-bae3-29df7aca0544] 2025-02-19 08:06:03.786410 | orchestrator | 08:06:03.786 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-02-19 08:06:03.806839 | orchestrator | 08:06:03.806 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=00a01370-945d-463a-a32d-5e52b5234eb4] 2025-02-19 08:06:03.815686 | orchestrator | 08:06:03.815 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-02-19 08:06:03.829287 | orchestrator | 08:06:03.828 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=ae299bec-d23f-4bd0-a551-f66f5e1afde1] 2025-02-19 08:06:03.845490 | orchestrator | 08:06:03.845 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-02-19 08:06:03.853876 | orchestrator | 08:06:03.853 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=446a9b87ecb7a24177cb04e5de205454d982b0f2] 2025-02-19 08:06:03.857245 | orchestrator | 08:06:03.854 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=06a3a42c-cb57-4c14-955c-f9e446b3a982] 2025-02-19 08:06:03.857747 | orchestrator | 08:06:03.857 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=0c5208c8-9aa1-4e87-9cdb-910770e18a0c] 2025-02-19 08:06:03.861311 | orchestrator | 08:06:03.861 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-02-19 08:06:03.867504 | orchestrator | 08:06:03.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-02-19 08:06:03.872136 | orchestrator | 08:06:03.871 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-02-19 08:06:03.880905 | orchestrator | 08:06:03.880 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=9556104a6136a54b7cfb3e897d9e538b9b669b58] 2025-02-19 08:06:03.886599 | orchestrator | 08:06:03.886 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-02-19 08:06:03.890415 | orchestrator | 08:06:03.890 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=b50482d4-467d-4151-94c3-bb810c8ecc19] 2025-02-19 08:06:03.913423 | orchestrator | 08:06:03.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=69806146-708c-4195-b6c7-ec061db9d03d] 2025-02-19 08:06:04.083748 | orchestrator | 08:06:04.083 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=293d9137-13ea-4af5-8ad3-3b58f6addeb8] 2025-02-19 08:06:09.082526 | orchestrator | 08:06:09.082 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-02-19 08:06:09.371181 | orchestrator | 08:06:09.370 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=2c471650-030e-4cf1-9d5e-edaf33164d92] 2025-02-19 08:06:09.667732 | orchestrator | 08:06:09.667 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=6de48132-1866-4a24-bbf2-7892a775b450] 2025-02-19 08:06:09.674183 | orchestrator | 08:06:09.673 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-02-19 08:06:13.762870 | orchestrator | 08:06:13.762 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-02-19 08:06:13.788135 | orchestrator | 08:06:13.787 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-02-19 08:06:13.817368 | orchestrator | 08:06:13.817 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-02-19 08:06:13.862971 | orchestrator | 08:06:13.862 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-02-19 08:06:13.868154 | orchestrator | 08:06:13.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-02-19 08:06:14.094733 | orchestrator | 08:06:14.094 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=6c38e120-2a61-498a-a8ca-bc35055fc2f6] 2025-02-19 08:06:14.140910 | orchestrator | 08:06:14.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=0573e752-03bc-434b-92ad-736ac2b2aef9] 2025-02-19 08:06:14.251126 | orchestrator | 08:06:14.250 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283] 2025-02-19 08:06:14.251539 | orchestrator | 08:06:14.251 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=c2f313e9-cec4-4f16-a2dd-db2bae446cdb] 2025-02-19 08:06:14.262794 | orchestrator | 08:06:14.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=e5676c63-b799-41fe-bf82-9c0ce222d8b3] 2025-02-19 08:06:17.395922 | orchestrator | 08:06:17.395 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=606c8ea3-91cd-4447-a3fe-6a53cd28d201] 2025-02-19 08:06:17.403077 | orchestrator | 08:06:17.402 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-02-19 08:06:17.406518 | orchestrator | 08:06:17.406 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-02-19 08:06:17.410218 | orchestrator | 08:06:17.410 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-02-19 08:06:17.546276 | orchestrator | 08:06:17.544 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=c1dc1288-0121-44bc-9265-22790688df44] 2025-02-19 08:06:17.558471 | orchestrator | 08:06:17.558 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=442af545-7c02-40b0-8828-31350868ab57] 2025-02-19 08:06:17.567098 | orchestrator | 08:06:17.566 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
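With the management subnet in place, the router named "testbed" is created against the external network and then wired to that subnet through a router interface. A minimal sketch of this wiring; the name, external network UUID and availability zone hint are taken from the plan output earlier in the log, while the cross-resource references are assumptions:

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id
    }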
2025-02-19 08:06:17.567265 | orchestrator | 08:06:17.567 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-02-19 08:06:17.568521 | orchestrator | 08:06:17.568 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-02-19 08:06:17.568681 | orchestrator | 08:06:17.568 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-02-19 08:06:17.570108 | orchestrator | 08:06:17.569 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-02-19 08:06:17.580348 | orchestrator | 08:06:17.577 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-02-19 08:06:17.717975 | orchestrator | 08:06:17.577 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-02-19 08:06:17.718185 | orchestrator | 08:06:17.577 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-02-19 08:06:17.718205 | orchestrator | 08:06:17.578 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-02-19 08:06:17.718235 | orchestrator | 08:06:17.717 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=41aa4a5a-5a63-4040-ab23-9ee5e0d7bd94] 2025-02-19 08:06:17.733366 | orchestrator | 08:06:17.733 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-02-19 08:06:17.865264 | orchestrator | 08:06:17.864 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=690262a4-6d53-45bf-a137-60c13ebf1ccc] 2025-02-19 08:06:17.882254 | orchestrator | 08:06:17.881 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-02-19 08:06:18.073886 | orchestrator | 08:06:18.073 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=052a164d-35f0-423c-a338-0707b5bdac39] 2025-02-19 08:06:18.080877 | orchestrator | 08:06:18.080 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-02-19 08:06:18.184129 | orchestrator | 08:06:18.183 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=69bfa87e-5b95-4b56-9589-53843d10f993] 2025-02-19 08:06:18.190419 | orchestrator | 08:06:18.190 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-02-19 08:06:18.296764 | orchestrator | 08:06:18.296 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=975f9fe6-ecc8-4206-9e59-76f98ef71c3b] 2025-02-19 08:06:18.305330 | orchestrator | 08:06:18.305 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-02-19 08:06:18.577348 | orchestrator | 08:06:18.576 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=50486d05-0aeb-47d7-9b26-59f22337212a] 2025-02-19 08:06:18.584135 | orchestrator | 08:06:18.583 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 
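The security group rules created above correspond to the plan earlier in the log: on the management group, SSH (22/tcp), WireGuard (51820/udp) and ICMP from anywhere plus unrestricted tcp and udp from 192.168.16.0/20; on the node group, tcp, udp and icmp from anywhere; and a VRRP rule for protocol 112. A minimal sketch of the management group and its SSH rule as they appear in the plan, with only the reference between the two resources assumed:

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }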
2025-02-19 08:06:18.746177 | orchestrator | 08:06:18.745 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=a4a4bd5b-cebd-4b3d-b859-22862af988f7] 2025-02-19 08:06:18.761179 | orchestrator | 08:06:18.760 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-02-19 08:06:18.889559 | orchestrator | 08:06:18.889 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=37d15e35-068c-48fe-bbd9-78612239a93a] 2025-02-19 08:06:19.040240 | orchestrator | 08:06:19.039 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=d68c218d-7802-44e1-a101-bd788cf6a752] 2025-02-19 08:06:23.252874 | orchestrator | 08:06:23.252 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=19761e93-447a-4d24-bf37-bdfeb7b4d25b] 2025-02-19 08:06:23.257415 | orchestrator | 08:06:23.257 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=dc372622-e2f7-4bb8-917d-acee4137ac73] 2025-02-19 08:06:23.489411 | orchestrator | 08:06:23.489 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=f925c5d2-dcc7-43b6-aced-6094d884843a] 2025-02-19 08:06:23.928426 | orchestrator | 08:06:23.927 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=7ef4ef32-4f6f-4cf6-8aeb-f4ef199a9a06] 2025-02-19 08:06:23.942201 | orchestrator | 08:06:23.941 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=57ae0d35-d09a-4ff4-8d67-47b12cab3fd9] 2025-02-19 08:06:24.029634 | orchestrator | 08:06:24.029 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=f6ac2a28-a206-47e3-a850-5bd9b390209a] 2025-02-19 08:06:24.282083 | orchestrator | 08:06:24.281 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=9e45572f-f065-44e7-9db5-6b5d239e3c1e] 2025-02-19 08:06:24.496702 | orchestrator | 08:06:24.494 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=a512242d-f970-4359-ac54-5d3eaeab5724] 2025-02-19 08:06:24.514770 | orchestrator | 08:06:24.514 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-02-19 08:06:24.528951 | orchestrator | 08:06:24.528 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-02-19 08:06:24.529519 | orchestrator | 08:06:24.529 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-02-19 08:06:24.537726 | orchestrator | 08:06:24.537 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-02-19 08:06:24.545925 | orchestrator | 08:06:24.545 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-02-19 08:06:24.546489 | orchestrator | 08:06:24.546 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-02-19 08:06:24.557377 | orchestrator | 08:06:24.554 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 
2025-02-19 08:06:30.963494 | orchestrator | 08:06:30.963 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=f05e57c7-f9ce-43cb-afa3-9c4c71397eff] 2025-02-19 08:06:30.977664 | orchestrator | 08:06:30.977 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-02-19 08:06:30.981898 | orchestrator | 08:06:30.981 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-02-19 08:06:30.982265 | orchestrator | 08:06:30.982 STDOUT terraform: local_file.inventory: Creating... 2025-02-19 08:06:30.987855 | orchestrator | 08:06:30.987 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=7aa5fb3cf621ed7fa207d8eb62814b64ba55c68b] 2025-02-19 08:06:30.989193 | orchestrator | 08:06:30.988 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=41ca8d41c154eaddb634438e1cf1bb892201eb4f] 2025-02-19 08:06:31.451705 | orchestrator | 08:06:31.451 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=f05e57c7-f9ce-43cb-afa3-9c4c71397eff] 2025-02-19 08:06:34.531371 | orchestrator | 08:06:34.531 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-02-19 08:06:34.542383 | orchestrator | 08:06:34.542 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-02-19 08:06:34.544649 | orchestrator | 08:06:34.544 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-02-19 08:06:34.546933 | orchestrator | 08:06:34.546 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-02-19 08:06:34.548044 | orchestrator | 08:06:34.547 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-02-19 08:06:34.554560 | orchestrator | 08:06:34.554 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-02-19 08:06:44.532447 | orchestrator | 08:06:44.532 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-02-19 08:06:44.543323 | orchestrator | 08:06:44.542 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-02-19 08:06:44.545726 | orchestrator | 08:06:44.545 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-02-19 08:06:44.547922 | orchestrator | 08:06:44.547 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-02-19 08:06:44.549129 | orchestrator | 08:06:44.548 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-02-19 08:06:44.555515 | orchestrator | 08:06:44.555 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-02-19 08:06:54.534337 | orchestrator | 08:06:54.533 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-02-19 08:06:54.544431 | orchestrator | 08:06:54.544 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-02-19 08:06:54.546533 | orchestrator | 08:06:54.546 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-02-19 08:06:54.548701 | orchestrator | 08:06:54.548 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-02-19 08:06:54.549819 | orchestrator | 08:06:54.549 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-02-19 08:06:54.556147 | orchestrator | 08:06:54.555 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-02-19 08:06:54.901829 | orchestrator | 08:06:54.901 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=3810ce3b-98be-49e4-82fd-16c6983b9d4c] 2025-02-19 08:06:55.006077 | orchestrator | 08:06:55.005 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=5958d6e6-dee6-48a6-b9c3-b0bb35a7f3ba] 2025-02-19 08:06:55.036105 | orchestrator | 08:06:55.035 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=b27d3615-d2fa-424c-b7ad-86ee160dbbe0] 2025-02-19 08:07:04.545443 | orchestrator | 08:07:04.545 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2025-02-19 08:07:04.549936 | orchestrator | 08:07:04.549 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-02-19 08:07:04.550857 | orchestrator | 08:07:04.550 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-02-19 08:07:05.057317 | orchestrator | 08:07:05.056 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 40s [id=1f55726f-de7d-4f36-9068-e6ad4a439e97] 2025-02-19 08:07:05.171941 | orchestrator | 08:07:05.171 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 40s [id=0158aa67-ec2c-4b2e-8638-64e27e8cc308] 2025-02-19 08:07:05.332044 | orchestrator | 08:07:05.331 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 40s [id=957d1926-7646-4f67-b4f8-f8f7ed59dd9e] 2025-02-19 08:07:05.353094 | orchestrator | 08:07:05.352 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-02-19 08:07:05.357185 | orchestrator | 08:07:05.356 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-02-19 08:07:05.359078 | orchestrator | 08:07:05.358 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=28043800923122451] 2025-02-19 08:07:05.377504 | orchestrator | 08:07:05.377 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-02-19 08:07:05.380017 | orchestrator | 08:07:05.379 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-02-19 08:07:05.387150 | orchestrator | 08:07:05.379 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-02-19 08:07:05.387280 | orchestrator | 08:07:05.386 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-02-19 08:07:05.389285 | orchestrator | 08:07:05.389 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-02-19 08:07:05.389697 | orchestrator | 08:07:05.389 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-02-19 08:07:05.397800 | orchestrator | 08:07:05.397 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 
2025-02-19 08:07:05.403815 | orchestrator | 08:07:05.403 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-02-19 08:07:05.404996 | orchestrator | 08:07:05.404 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-02-19 08:07:10.738606 | orchestrator | 08:07:10.738 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=5958d6e6-dee6-48a6-b9c3-b0bb35a7f3ba/6cdb92e8-c898-48ca-adcb-2a30d1567e49] 2025-02-19 08:07:10.739157 | orchestrator | 08:07:10.738 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=b27d3615-d2fa-424c-b7ad-86ee160dbbe0/5c11fa33-d2ef-45ea-bc93-56551b069e33] 2025-02-19 08:07:10.749356 | orchestrator | 08:07:10.749 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-02-19 08:07:10.758179 | orchestrator | 08:07:10.757 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-02-19 08:07:10.761186 | orchestrator | 08:07:10.760 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 6s [id=0158aa67-ec2c-4b2e-8638-64e27e8cc308/933f95c9-b090-4d95-b9b7-90a087e62286] 2025-02-19 08:07:10.769529 | orchestrator | 08:07:10.769 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 6s [id=1f55726f-de7d-4f36-9068-e6ad4a439e97/69806146-708c-4195-b6c7-ec061db9d03d] 2025-02-19 08:07:10.772997 | orchestrator | 08:07:10.772 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 6s [id=3810ce3b-98be-49e4-82fd-16c6983b9d4c/b50482d4-467d-4151-94c3-bb810c8ecc19] 2025-02-19 08:07:10.775173 | orchestrator | 08:07:10.775 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-02-19 08:07:10.778698 | orchestrator | 08:07:10.778 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=957d1926-7646-4f67-b4f8-f8f7ed59dd9e/ae299bec-d23f-4bd0-a551-f66f5e1afde1] 2025-02-19 08:07:10.788834 | orchestrator | 08:07:10.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=1f55726f-de7d-4f36-9068-e6ad4a439e97/0c5208c8-9aa1-4e87-9cdb-910770e18a0c] 2025-02-19 08:07:10.792857 | orchestrator | 08:07:10.792 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-02-19 08:07:10.795527 | orchestrator | 08:07:10.795 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=0158aa67-ec2c-4b2e-8638-64e27e8cc308/00a01370-945d-463a-a32d-5e52b5234eb4] 2025-02-19 08:07:10.797992 | orchestrator | 08:07:10.797 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-02-19 08:07:10.800089 | orchestrator | 08:07:10.799 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=b27d3615-d2fa-424c-b7ad-86ee160dbbe0/420ab18e-fdcb-4974-b92c-678938c23e9b] 2025-02-19 08:07:10.806847 | orchestrator | 08:07:10.806 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-02-19 08:07:10.810103 | orchestrator | 08:07:10.809 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 
2025-02-19 08:07:10.810429 | orchestrator | 08:07:10.810 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=3810ce3b-98be-49e4-82fd-16c6983b9d4c/7ac42676-4a1f-422d-9e47-87a492d5a795] 2025-02-19 08:07:10.811864 | orchestrator | 08:07:10.811 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-02-19 08:07:10.824173 | orchestrator | 08:07:10.824 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-02-19 08:07:16.210530 | orchestrator | 08:07:16.209 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=0158aa67-ec2c-4b2e-8638-64e27e8cc308/eb5d754e-727a-4983-9d71-2a65afff7a52] 2025-02-19 08:07:16.229572 | orchestrator | 08:07:16.229 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=1f55726f-de7d-4f36-9068-e6ad4a439e97/923f2b44-0879-4277-a106-844be4b2565d] 2025-02-19 08:07:16.240403 | orchestrator | 08:07:16.239 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=957d1926-7646-4f67-b4f8-f8f7ed59dd9e/116ec19e-6576-4adf-ada1-59164a5d1c9f] 2025-02-19 08:07:16.249218 | orchestrator | 08:07:16.248 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=3810ce3b-98be-49e4-82fd-16c6983b9d4c/0f115ae7-332f-47b5-bfba-4efd1297123a] 2025-02-19 08:07:16.257587 | orchestrator | 08:07:16.257 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=5958d6e6-dee6-48a6-b9c3-b0bb35a7f3ba/06a3a42c-cb57-4c14-955c-f9e446b3a982] 2025-02-19 08:07:16.258844 | orchestrator | 08:07:16.258 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=b27d3615-d2fa-424c-b7ad-86ee160dbbe0/21743850-c155-402b-9a95-271bd8472759] 2025-02-19 08:07:16.267629 | orchestrator | 08:07:16.267 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=5958d6e6-dee6-48a6-b9c3-b0bb35a7f3ba/91d4d525-aaae-41a7-908a-2e5d882c10b9] 2025-02-19 08:07:16.269569 | orchestrator | 08:07:16.269 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=957d1926-7646-4f67-b4f8-f8f7ed59dd9e/d6c08883-a737-4166-bae3-29df7aca0544] 2025-02-19 08:07:20.825720 | orchestrator | 08:07:20.825 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-02-19 08:07:30.830550 | orchestrator | 08:07:30.830 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-02-19 08:07:31.406454 | orchestrator | 08:07:31.405 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=e0079e52-f619-40ff-85cd-519239e302a9] 2025-02-19 08:07:31.425318 | orchestrator | 08:07:31.425 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
2025-02-19 08:07:31.425387 | orchestrator | 08:07:31.425 STDOUT terraform: Outputs: 2025-02-19 08:07:31.425445 | orchestrator | 08:07:31.425 STDOUT terraform: manager_address = 2025-02-19 08:07:31.425454 | orchestrator | 08:07:31.425 STDOUT terraform: private_key = 2025-02-19 08:07:31.622205 | orchestrator | changed 2025-02-19 08:07:31.658145 | 2025-02-19 08:07:31.658271 | TASK [Create infrastructure (stable)] 2025-02-19 08:07:31.773596 | orchestrator | skipping: Conditional result was False 2025-02-19 08:07:31.794691 | 2025-02-19 08:07:31.794863 | TASK [Fetch manager address] 2025-02-19 08:07:42.232473 | orchestrator | ok 2025-02-19 08:07:42.248410 | 2025-02-19 08:07:42.248555 | TASK [Set manager_host address] 2025-02-19 08:07:42.352087 | orchestrator | ok 2025-02-19 08:07:42.364213 | 2025-02-19 08:07:42.364331 | LOOP [Update ansible collections] 2025-02-19 08:07:43.262082 | orchestrator | changed 2025-02-19 08:07:44.153089 | orchestrator | changed 2025-02-19 08:07:44.175267 | 2025-02-19 08:07:44.175433 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-02-19 08:07:54.714263 | orchestrator | ok 2025-02-19 08:07:54.726092 | 2025-02-19 08:07:54.726204 | TASK [Wait a little longer for the manager so that everything is ready] 2025-02-19 08:08:54.771219 | orchestrator | ok 2025-02-19 08:08:54.780689 | 2025-02-19 08:08:54.780807 | TASK [Fetch manager ssh hostkey] 2025-02-19 08:08:55.823117 | orchestrator | Output suppressed because no_log was given 2025-02-19 08:08:55.841897 | 2025-02-19 08:08:55.842121 | TASK [Get ssh keypair from terraform environment] 2025-02-19 08:08:56.392446 | orchestrator | changed 2025-02-19 08:08:56.413300 | 2025-02-19 08:08:56.413495 | TASK [Point out that the following task takes some time and does not give any output] 2025-02-19 08:08:56.467276 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
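The "Fetch manager address" and "Wait up to 300 seconds for port 22" steps above can be reproduced by hand against the same Terraform working directory. The following is only an illustrative sketch, not the playbook the job runs: it assumes the Terraform state from the "Create infrastructure" task is still present and that the manager_address output shown above is readable; the job itself most likely implements the wait with Ansible's wait_for module rather than a shell loop.

  # Read the manager's floating IP from the Terraform output, then poll
  # port 22 until the SSH banner contains "OpenSSH" (as the Zuul task does).
  MANAGER_ADDRESS=$(terraform output -raw manager_address)
  for _ in $(seq 1 30); do
      if nc -w 5 "$MANAGER_ADDRESS" 22 </dev/null | grep -q OpenSSH; then
          echo "manager SSH is reachable"
          break
      fi
      sleep 10
  done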
2025-02-19 08:08:56.479494 | 2025-02-19 08:08:56.479639 | TASK [Run manager part 0] 2025-02-19 08:08:57.388824 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-02-19 08:08:57.438670 | orchestrator | 2025-02-19 08:08:59.472412 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-02-19 08:08:59.472465 | orchestrator | 2025-02-19 08:08:59.472487 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-02-19 08:08:59.472502 | orchestrator | ok: [testbed-manager] 2025-02-19 08:09:01.585362 | orchestrator | 2025-02-19 08:09:01.585496 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-02-19 08:09:01.585528 | orchestrator | 2025-02-19 08:09:01.585544 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:09:01.585574 | orchestrator | ok: [testbed-manager] 2025-02-19 08:09:02.347172 | orchestrator | 2025-02-19 08:09:02.347240 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-02-19 08:09:02.347260 | orchestrator | ok: [testbed-manager] 2025-02-19 08:09:02.392423 | orchestrator | 2025-02-19 08:09:02.392465 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-02-19 08:09:02.392481 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:09:02.432840 | orchestrator | 2025-02-19 08:09:02.432911 | orchestrator | TASK [Update package cache] **************************************************** 2025-02-19 08:09:02.432932 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:09:02.458280 | orchestrator | 2025-02-19 08:09:02.458334 | orchestrator | TASK [Install required packages] *********************************************** 2025-02-19 08:09:02.458351 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:09:02.488792 | orchestrator | 2025-02-19 08:09:02.488836 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-02-19 08:09:02.488851 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:09:02.516344 | orchestrator | 2025-02-19 08:09:02.516422 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-02-19 08:09:02.516448 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:09:02.550173 | orchestrator | 2025-02-19 08:09:02.550231 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-02-19 08:09:02.550248 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:09:02.578626 | orchestrator | 2025-02-19 08:09:02.578710 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-02-19 08:09:02.578737 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:09:03.450685 | orchestrator | 2025-02-19 08:09:03.450838 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-02-19 08:09:03.450856 | orchestrator | changed: [testbed-manager] 2025-02-19 08:11:27.895612 | orchestrator | 2025-02-19 08:11:27.895709 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-02-19 08:11:27.895749 | orchestrator | changed: [testbed-manager] 2025-02-19 08:12:52.375738 | orchestrator | 2025-02-19 08:12:52.375919 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-02-19 08:12:52.375961 | orchestrator | changed: [testbed-manager] 2025-02-19 08:13:13.471659 | orchestrator | 2025-02-19 08:13:13.471809 | orchestrator | TASK [Install required packages] *********************************************** 2025-02-19 08:13:13.471847 | orchestrator | changed: [testbed-manager] 2025-02-19 08:13:22.695956 | orchestrator | 2025-02-19 08:13:22.696165 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-02-19 08:13:22.696208 | orchestrator | changed: [testbed-manager] 2025-02-19 08:13:22.756198 | orchestrator | 2025-02-19 08:13:22.756318 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-02-19 08:13:22.756377 | orchestrator | ok: [testbed-manager] 2025-02-19 08:13:23.552483 | orchestrator | 2025-02-19 08:13:23.552599 | orchestrator | TASK [Get current user] ******************************************************** 2025-02-19 08:13:23.552644 | orchestrator | ok: [testbed-manager] 2025-02-19 08:13:24.291322 | orchestrator | 2025-02-19 08:13:24.291978 | orchestrator | TASK [Create venv directory] *************************************************** 2025-02-19 08:13:24.292024 | orchestrator | changed: [testbed-manager] 2025-02-19 08:13:31.104152 | orchestrator | 2025-02-19 08:13:31.104259 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-02-19 08:13:31.104295 | orchestrator | changed: [testbed-manager] 2025-02-19 08:13:37.418684 | orchestrator | 2025-02-19 08:13:37.418795 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-02-19 08:13:37.418847 | orchestrator | changed: [testbed-manager] 2025-02-19 08:13:40.213160 | orchestrator | 2025-02-19 08:13:40.213241 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-02-19 08:13:40.213269 | orchestrator | changed: [testbed-manager] 2025-02-19 08:13:42.045914 | orchestrator | 2025-02-19 08:13:42.046075 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-02-19 08:13:42.046122 | orchestrator | changed: [testbed-manager] 2025-02-19 08:13:43.187329 | orchestrator | 2025-02-19 08:13:43.187436 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-02-19 08:13:43.187472 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-02-19 08:13:43.232523 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-02-19 08:13:43.232614 | orchestrator | 2025-02-19 08:13:43.232633 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-02-19 08:13:43.232657 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-02-19 08:13:46.804327 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-02-19 08:13:46.804435 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-02-19 08:13:46.804453 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-02-19 08:13:46.804485 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-02-19 08:13:47.393124 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-02-19 08:13:47.393172 | orchestrator | 2025-02-19 08:13:47.393181 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-02-19 08:13:47.393195 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:07.326503 | orchestrator | 2025-02-19 08:14:07.326618 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-02-19 08:14:07.326654 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-02-19 08:14:09.724543 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-02-19 08:14:09.724644 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-02-19 08:14:09.724665 | orchestrator | 2025-02-19 08:14:09.724685 | orchestrator | TASK [Install local collections] *********************************************** 2025-02-19 08:14:09.724718 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-02-19 08:14:11.219032 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-02-19 08:14:11.219083 | orchestrator | 2025-02-19 08:14:11.219092 | orchestrator | PLAY [Create operator user] **************************************************** 2025-02-19 08:14:11.219101 | orchestrator | 2025-02-19 08:14:11.219108 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:14:11.219123 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:11.269368 | orchestrator | 2025-02-19 08:14:11.269418 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-02-19 08:14:11.269436 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:11.335505 | orchestrator | 2025-02-19 08:14:11.335576 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-02-19 08:14:11.335605 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:12.149957 | orchestrator | 2025-02-19 08:14:12.150101 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-02-19 08:14:12.150141 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:12.890704 | orchestrator | 2025-02-19 08:14:12.890808 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-02-19 08:14:12.890848 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:14.325350 | orchestrator | 2025-02-19 08:14:14.325390 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-02-19 08:14:14.325403 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-02-19 08:14:15.657267 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-02-19 08:14:15.657326 | orchestrator | 2025-02-19 08:14:15.657336 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-02-19 08:14:15.657352 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:17.464513 | orchestrator | 2025-02-19 08:14:17.464736 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-02-19 08:14:17.464766 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-02-19 
08:14:18.030979 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-02-19 08:14:18.031078 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-02-19 08:14:18.031099 | orchestrator | 2025-02-19 08:14:18.031116 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-02-19 08:14:18.031146 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:18.099152 | orchestrator | 2025-02-19 08:14:18.099276 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-02-19 08:14:18.099319 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:14:19.018545 | orchestrator | 2025-02-19 08:14:19.018598 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-02-19 08:14:19.018619 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:14:19.054940 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:19.054991 | orchestrator | 2025-02-19 08:14:19.055000 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-02-19 08:14:19.055015 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:14:19.091960 | orchestrator | 2025-02-19 08:14:19.092007 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-02-19 08:14:19.092024 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:14:19.121584 | orchestrator | 2025-02-19 08:14:19.121630 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-02-19 08:14:19.121645 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:14:19.162181 | orchestrator | 2025-02-19 08:14:19.162228 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-02-19 08:14:19.162243 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:14:19.937232 | orchestrator | 2025-02-19 08:14:19.937374 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-02-19 08:14:19.937393 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:21.373216 | orchestrator | 2025-02-19 08:14:21.373304 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-02-19 08:14:21.373318 | orchestrator | 2025-02-19 08:14:21.373327 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:14:21.373348 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:22.390169 | orchestrator | 2025-02-19 08:14:22.390480 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-02-19 08:14:22.390502 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:22.509486 | orchestrator | 2025-02-19 08:14:22.509605 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:14:22.509614 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-02-19 08:14:22.509620 | orchestrator | 2025-02-19 08:14:22.748638 | orchestrator | changed 2025-02-19 08:14:22.767974 | 2025-02-19 08:14:22.768102 | TASK [Point out that the log in on the manager is now possible] 2025-02-19 08:14:22.818525 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
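The 'make login' hint above refers to a Makefile target in the osism/testbed repository. As a rough, purely illustrative equivalent, one can ssh to the manager with the deployment key that was fetched from the Terraform environment a few tasks earlier. The key filename below and the "dragon" operator user are assumptions (the /home/dragon async-job paths later in this log suggest dragon is the operator account), not values taken from this job's output.

  # Hypothetical stand-in for 'make login'; key path and user name are assumed.
  ssh -i terraform/.id_rsa.testbed -o StrictHostKeyChecking=accept-new \
      dragon@"$MANAGER_ADDRESS"

MANAGER_ADDRESS here would be the manager's floating IP from the Terraform output, as in the earlier sketch.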
2025-02-19 08:14:22.829658 | 2025-02-19 08:14:22.829764 | TASK [Point out that the following task takes some time and does not give any output] 2025-02-19 08:14:22.880256 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-02-19 08:14:22.921064 | 2025-02-19 08:14:22.921192 | TASK [Run manager part 1 + 2] 2025-02-19 08:14:23.771061 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-02-19 08:14:23.824582 | orchestrator | 2025-02-19 08:14:26.358825 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-02-19 08:14:26.358931 | orchestrator | 2025-02-19 08:14:26.358968 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:14:26.358996 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:26.393679 | orchestrator | 2025-02-19 08:14:26.393754 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-02-19 08:14:26.393777 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:14:26.441338 | orchestrator | 2025-02-19 08:14:26.441419 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-02-19 08:14:26.441443 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:26.488922 | orchestrator | 2025-02-19 08:14:26.489047 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-19 08:14:26.489099 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:26.565107 | orchestrator | 2025-02-19 08:14:26.565174 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-19 08:14:26.565192 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:26.629778 | orchestrator | 2025-02-19 08:14:26.629850 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-19 08:14:26.629869 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:26.675436 | orchestrator | 2025-02-19 08:14:26.675519 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-19 08:14:26.675544 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-02-19 08:14:27.407349 | orchestrator | 2025-02-19 08:14:27.407994 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-19 08:14:27.408033 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:27.463788 | orchestrator | 2025-02-19 08:14:27.463949 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-19 08:14:27.463991 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:14:28.868939 | orchestrator | 2025-02-19 08:14:28.869065 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-19 08:14:28.869113 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:29.437929 | orchestrator | 2025-02-19 08:14:29.438089 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-19 08:14:29.438131 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:30.650746 | orchestrator | 2025-02-19 08:14:30.650854 | orchestrator | TASK 
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-19 08:14:30.650913 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:43.770290 | orchestrator | 2025-02-19 08:14:43.770379 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-19 08:14:43.770411 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:44.463131 | orchestrator | 2025-02-19 08:14:44.463238 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-02-19 08:14:44.463273 | orchestrator | ok: [testbed-manager] 2025-02-19 08:14:44.518797 | orchestrator | 2025-02-19 08:14:44.518949 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-02-19 08:14:44.519000 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:14:45.480509 | orchestrator | 2025-02-19 08:14:45.480588 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-02-19 08:14:45.480615 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:46.456647 | orchestrator | 2025-02-19 08:14:46.456763 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-02-19 08:14:46.456798 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:47.031706 | orchestrator | 2025-02-19 08:14:47.031780 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-02-19 08:14:47.031805 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:47.071709 | orchestrator | 2025-02-19 08:14:47.071810 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-02-19 08:14:47.071841 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-02-19 08:14:50.136991 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-02-19 08:14:50.137133 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-02-19 08:14:50.137167 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-02-19 08:14:50.137211 | orchestrator | changed: [testbed-manager] 2025-02-19 08:14:59.246338 | orchestrator | 2025-02-19 08:14:59.246482 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-02-19 08:14:59.246536 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-02-19 08:15:00.310442 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-02-19 08:15:00.310545 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-02-19 08:15:00.310565 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-02-19 08:15:00.310581 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-02-19 08:15:00.310596 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-02-19 08:15:00.310611 | orchestrator | 2025-02-19 08:15:00.310626 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-02-19 08:15:00.310674 | orchestrator | changed: [testbed-manager] 2025-02-19 08:15:00.348500 | orchestrator | 2025-02-19 08:15:00.348646 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-02-19 08:15:00.348690 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:15:03.659466 | orchestrator | 2025-02-19 08:15:03.660301 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-02-19 08:15:03.660352 | orchestrator | changed: [testbed-manager] 2025-02-19 08:15:03.705615 | orchestrator | 2025-02-19 08:15:03.705678 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-02-19 08:15:03.705703 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:16:41.673248 | orchestrator | 2025-02-19 08:16:41.673302 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-02-19 08:16:41.673321 | orchestrator | changed: [testbed-manager] 2025-02-19 08:16:42.881405 | orchestrator | 2025-02-19 08:16:42.881483 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-19 08:16:42.881512 | orchestrator | ok: [testbed-manager] 2025-02-19 08:16:42.992207 | orchestrator | 2025-02-19 08:16:42.992438 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:16:42.992467 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-02-19 08:16:42.992483 | orchestrator | 2025-02-19 08:16:43.057052 | orchestrator | changed 2025-02-19 08:16:43.074837 | 2025-02-19 08:16:43.075011 | TASK [Reboot manager] 2025-02-19 08:16:44.634392 | orchestrator | changed 2025-02-19 08:16:44.654549 | 2025-02-19 08:16:44.654744 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-02-19 08:17:01.074439 | orchestrator | ok 2025-02-19 08:17:01.085243 | 2025-02-19 08:17:01.085355 | TASK [Wait a little longer for the manager so that everything is ready] 2025-02-19 08:18:01.132067 | orchestrator | ok 2025-02-19 08:18:01.143843 | 2025-02-19 08:18:01.143979 | TASK [Deploy manager + bootstrap nodes] 2025-02-19 08:18:03.747079 | orchestrator | 2025-02-19 08:18:03.750929 | orchestrator | # DEPLOY MANAGER 2025-02-19 08:18:03.750992 | orchestrator | 2025-02-19 08:18:03.751011 | orchestrator | + set -e 2025-02-19 08:18:03.751059 | orchestrator | + echo 2025-02-19 08:18:03.751079 | orchestrator | + echo '# DEPLOY MANAGER' 2025-02-19 08:18:03.751096 | 
orchestrator | + echo 2025-02-19 08:18:03.751121 | orchestrator | + cat /opt/manager-vars.sh 2025-02-19 08:18:03.751160 | orchestrator | export NUMBER_OF_NODES=6 2025-02-19 08:18:03.751179 | orchestrator | 2025-02-19 08:18:03.751194 | orchestrator | export CEPH_VERSION=quincy 2025-02-19 08:18:03.751208 | orchestrator | export CONFIGURATION_VERSION=main 2025-02-19 08:18:03.751222 | orchestrator | export MANAGER_VERSION=latest 2025-02-19 08:18:03.751236 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-02-19 08:18:03.751250 | orchestrator | 2025-02-19 08:18:03.751265 | orchestrator | export ARA=false 2025-02-19 08:18:03.751279 | orchestrator | export TEMPEST=false 2025-02-19 08:18:03.751293 | orchestrator | export IS_ZUUL=true 2025-02-19 08:18:03.751307 | orchestrator | 2025-02-19 08:18:03.751320 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-02-19 08:18:03.751335 | orchestrator | export EXTERNAL_API=false 2025-02-19 08:18:03.751349 | orchestrator | 2025-02-19 08:18:03.751362 | orchestrator | export IMAGE_USER=ubuntu 2025-02-19 08:18:03.751376 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-02-19 08:18:03.751391 | orchestrator | 2025-02-19 08:18:03.751405 | orchestrator | export CEPH_STACK=ceph-ansible 2025-02-19 08:18:03.751422 | orchestrator | 2025-02-19 08:18:03.752951 | orchestrator | + echo 2025-02-19 08:18:03.753049 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-19 08:18:03.753083 | orchestrator | ++ export INTERACTIVE=false 2025-02-19 08:18:03.753673 | orchestrator | ++ INTERACTIVE=false 2025-02-19 08:18:03.753703 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-19 08:18:03.753721 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-19 08:18:03.753728 | orchestrator | + source /opt/manager-vars.sh 2025-02-19 08:18:03.753735 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-19 08:18:03.753742 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-19 08:18:03.753748 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-19 08:18:03.753779 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-19 08:18:03.753786 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-19 08:18:03.753793 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-19 08:18:03.753805 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-19 08:18:03.753812 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-19 08:18:03.753819 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-19 08:18:03.753825 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-19 08:18:03.753832 | orchestrator | ++ export ARA=false 2025-02-19 08:18:03.753839 | orchestrator | ++ ARA=false 2025-02-19 08:18:03.753849 | orchestrator | ++ export TEMPEST=false 2025-02-19 08:18:03.753860 | orchestrator | ++ TEMPEST=false 2025-02-19 08:18:03.753870 | orchestrator | ++ export IS_ZUUL=true 2025-02-19 08:18:03.753882 | orchestrator | ++ IS_ZUUL=true 2025-02-19 08:18:03.753890 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-02-19 08:18:03.753903 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-02-19 08:18:03.798556 | orchestrator | ++ export EXTERNAL_API=false 2025-02-19 08:18:03.798650 | orchestrator | ++ EXTERNAL_API=false 2025-02-19 08:18:03.798657 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-19 08:18:03.798662 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-19 08:18:03.798668 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-19 08:18:03.798695 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-19 08:18:03.798703 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-02-19 08:18:03.798709 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-19 08:18:03.798716 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-02-19 08:18:03.798740 | orchestrator | + docker version 2025-02-19 08:18:04.097905 | orchestrator | Client: Docker Engine - Community 2025-02-19 08:18:04.098080 | orchestrator | Version: 27.4.1 2025-02-19 08:18:04.098111 | orchestrator | API version: 1.47 2025-02-19 08:18:04.098127 | orchestrator | Go version: go1.22.10 2025-02-19 08:18:04.098141 | orchestrator | Git commit: b9d17ea 2025-02-19 08:18:04.098155 | orchestrator | Built: Tue Dec 17 15:45:46 2024 2025-02-19 08:18:04.098171 | orchestrator | OS/Arch: linux/amd64 2025-02-19 08:18:04.098185 | orchestrator | Context: default 2025-02-19 08:18:04.098199 | orchestrator | 2025-02-19 08:18:04.098213 | orchestrator | Server: Docker Engine - Community 2025-02-19 08:18:04.098227 | orchestrator | Engine: 2025-02-19 08:18:04.098241 | orchestrator | Version: 27.4.1 2025-02-19 08:18:04.098255 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-02-19 08:18:04.098268 | orchestrator | Go version: go1.22.10 2025-02-19 08:18:04.098284 | orchestrator | Git commit: c710b88 2025-02-19 08:18:04.098336 | orchestrator | Built: Tue Dec 17 15:45:46 2024 2025-02-19 08:18:04.098351 | orchestrator | OS/Arch: linux/amd64 2025-02-19 08:18:04.098365 | orchestrator | Experimental: false 2025-02-19 08:18:04.098379 | orchestrator | containerd: 2025-02-19 08:18:04.098393 | orchestrator | Version: 1.7.25 2025-02-19 08:18:04.098407 | orchestrator | GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb 2025-02-19 08:18:04.098421 | orchestrator | runc: 2025-02-19 08:18:04.098444 | orchestrator | Version: 1.2.4 2025-02-19 08:18:04.100588 | orchestrator | GitCommit: v1.2.4-0-g6c52b3f 2025-02-19 08:18:04.100616 | orchestrator | docker-init: 2025-02-19 08:18:04.100631 | orchestrator | Version: 0.19.0 2025-02-19 08:18:04.100645 | orchestrator | GitCommit: de40ad0 2025-02-19 08:18:04.100665 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-02-19 08:18:04.109529 | orchestrator | + set -e 2025-02-19 08:18:04.109642 | orchestrator | + source /opt/manager-vars.sh 2025-02-19 08:18:04.109665 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-19 08:18:04.109681 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-19 08:18:04.109695 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-19 08:18:04.109709 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-19 08:18:04.109724 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-19 08:18:04.109740 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-19 08:18:04.109785 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-19 08:18:04.109801 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-19 08:18:04.109815 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-19 08:18:04.109829 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-19 08:18:04.109843 | orchestrator | ++ export ARA=false 2025-02-19 08:18:04.109857 | orchestrator | ++ ARA=false 2025-02-19 08:18:04.109871 | orchestrator | ++ export TEMPEST=false 2025-02-19 08:18:04.109885 | orchestrator | ++ TEMPEST=false 2025-02-19 08:18:04.109898 | orchestrator | ++ export IS_ZUUL=true 2025-02-19 08:18:04.109912 | orchestrator | ++ IS_ZUUL=true 2025-02-19 08:18:04.109927 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-02-19 08:18:04.109942 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 
2025-02-19 08:18:04.109984 | orchestrator | ++ export EXTERNAL_API=false 2025-02-19 08:18:04.110005 | orchestrator | ++ EXTERNAL_API=false 2025-02-19 08:18:04.110071 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-19 08:18:04.110087 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-19 08:18:04.110108 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-19 08:18:04.110122 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-19 08:18:04.110136 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-19 08:18:04.110150 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-19 08:18:04.110164 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-19 08:18:04.110178 | orchestrator | ++ export INTERACTIVE=false 2025-02-19 08:18:04.110192 | orchestrator | ++ INTERACTIVE=false 2025-02-19 08:18:04.110206 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-19 08:18:04.110219 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-19 08:18:04.110233 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-19 08:18:04.110250 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-19 08:18:04.110273 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh quincy 2025-02-19 08:18:04.113681 | orchestrator | + set -e 2025-02-19 08:18:04.113950 | orchestrator | + VERSION=quincy 2025-02-19 08:18:04.114319 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-02-19 08:18:04.121839 | orchestrator | + [[ -n ceph_version: quincy ]] 2025-02-19 08:18:04.128006 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: quincy/g' /opt/configuration/environments/manager/configuration.yml 2025-02-19 08:18:04.128110 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.1 2025-02-19 08:18:04.135872 | orchestrator | + set -e 2025-02-19 08:18:04.137070 | orchestrator | + VERSION=2024.1 2025-02-19 08:18:04.137116 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-02-19 08:18:04.141052 | orchestrator | + [[ -n openstack_version: 2024.1 ]] 2025-02-19 08:18:04.147389 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.1/g' /opt/configuration/environments/manager/configuration.yml 2025-02-19 08:18:04.147506 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-02-19 08:18:04.148169 | orchestrator | ++ semver latest 7.0.0 2025-02-19 08:18:04.216664 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-19 08:18:04.263926 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-19 08:18:04.264044 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-02-19 08:18:04.264062 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-02-19 08:18:04.264125 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-19 08:18:04.264916 | orchestrator | + source /opt/venv/bin/activate 2025-02-19 08:18:04.266343 | orchestrator | ++ deactivate nondestructive 2025-02-19 08:18:04.266375 | orchestrator | ++ '[' -n '' ']' 2025-02-19 08:18:04.266390 | orchestrator | ++ '[' -n '' ']' 2025-02-19 08:18:04.266408 | orchestrator | ++ hash -r 2025-02-19 08:18:04.266422 | orchestrator | ++ '[' -n '' ']' 2025-02-19 08:18:04.266436 | orchestrator | ++ unset VIRTUAL_ENV 2025-02-19 08:18:04.266450 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-02-19 08:18:04.266463 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-02-19 08:18:04.266477 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-02-19 08:18:04.266491 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-02-19 08:18:04.266504 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-02-19 08:18:04.266518 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-02-19 08:18:04.266532 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-19 08:18:04.266551 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-19 08:18:05.892404 | orchestrator | ++ export PATH 2025-02-19 08:18:05.892600 | orchestrator | ++ '[' -n '' ']' 2025-02-19 08:18:05.892625 | orchestrator | ++ '[' -z '' ']' 2025-02-19 08:18:05.892640 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-02-19 08:18:05.892655 | orchestrator | ++ PS1='(venv) ' 2025-02-19 08:18:05.892669 | orchestrator | ++ export PS1 2025-02-19 08:18:05.892684 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-02-19 08:18:05.892699 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-02-19 08:18:05.892713 | orchestrator | ++ hash -r 2025-02-19 08:18:05.892728 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-02-19 08:18:05.892789 | orchestrator | 2025-02-19 08:18:06.519199 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-02-19 08:18:06.519328 | orchestrator | 2025-02-19 08:18:06.519344 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-19 08:18:06.519368 | orchestrator | ok: [testbed-manager] 2025-02-19 08:18:07.613904 | orchestrator | 2025-02-19 08:18:07.614098 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-02-19 08:18:07.614145 | orchestrator | changed: [testbed-manager] 2025-02-19 08:18:10.271502 | orchestrator | 2025-02-19 08:18:10.271651 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-02-19 08:18:10.271673 | orchestrator | 2025-02-19 08:18:10.271688 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:18:10.271722 | orchestrator | ok: [testbed-manager] 2025-02-19 08:18:16.793626 | orchestrator | 2025-02-19 08:18:16.793798 | orchestrator | TASK [Pull images] ************************************************************* 2025-02-19 08:18:16.793840 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-02-19 08:19:13.041177 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-02-19 08:19:13.041278 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:quincy) 2025-02-19 08:19:13.041289 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-02-19 08:19:13.041298 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.1) 2025-02-19 08:19:13.041306 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.2-alpine) 2025-02-19 08:19:13.041315 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.10) 2025-02-19 08:19:13.041323 | 
orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-02-19 08:19:13.041331 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-02-19 08:19:13.041339 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-netbox:latest) 2025-02-19 08:19:13.041346 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-02-19 08:19:13.041354 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.3.3) 2025-02-19 08:19:13.041361 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.4) 2025-02-19 08:19:13.041386 | orchestrator | 2025-02-19 08:19:13.041395 | orchestrator | TASK [Check status] ************************************************************ 2025-02-19 08:19:13.041415 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-19 08:19:13.041424 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-02-19 08:19:13.041432 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-02-19 08:19:13.041440 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-02-19 08:19:13.041449 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j635014319702.1522', 'results_file': '/home/dragon/.ansible_async/j635014319702.1522', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041465 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j851098753553.1547', 'results_file': '/home/dragon/.ansible_async/j851098753553.1547', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041473 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j55413577419.1572', 'results_file': '/home/dragon/.ansible_async/j55413577419.1572', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:quincy', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041485 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j990062546569.1604', 'results_file': '/home/dragon/.ansible_async/j990062546569.1604', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041496 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-02-19 08:19:13.041504 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j989723142575.1637', 'results_file': '/home/dragon/.ansible_async/j989723142575.1637', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.1', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041512 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j9785995383.1670', 'results_file': '/home/dragon/.ansible_async/j9785995383.1670', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.2-alpine', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041519 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j528035697880.1702', 'results_file': '/home/dragon/.ansible_async/j528035697880.1702', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.10', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041527 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j330823434432.1736', 'results_file': '/home/dragon/.ansible_async/j330823434432.1736', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041534 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j281332101969.1771', 'results_file': '/home/dragon/.ansible_async/j281332101969.1771', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041544 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j272590092103.1803', 'results_file': '/home/dragon/.ansible_async/j272590092103.1803', 'changed': True, 'item': 'registry.osism.tech/osism/osism-netbox:latest', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041552 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j998952094309.1842', 'results_file': '/home/dragon/.ansible_async/j998952094309.1842', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041565 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j544181853503.1874', 'results_file': '/home/dragon/.ansible_async/j544181853503.1874', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.3.3', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041572 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j605092365831.1900', 'results_file': '/home/dragon/.ansible_async/j605092365831.1900', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.4', 'ansible_loop_var': 'item'}) 2025-02-19 08:19:13.041580 | orchestrator | 2025-02-19 08:19:13.041592 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-02-19 08:19:13.104242 | orchestrator | ok: [testbed-manager] 2025-02-19 08:19:13.588298 | orchestrator | 2025-02-19 08:19:13.588428 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-02-19 08:19:13.588473 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:13.929766 | orchestrator | 2025-02-19 08:19:13.929888 | orchestrator | TASK [Add 
netbox_postgres_volume_type parameter] ******************************* 2025-02-19 08:19:13.929926 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:14.270544 | orchestrator | 2025-02-19 08:19:14.270663 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-02-19 08:19:14.270784 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:14.331203 | orchestrator | 2025-02-19 08:19:14.332036 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-02-19 08:19:14.332089 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:19:14.700977 | orchestrator | 2025-02-19 08:19:14.701086 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-02-19 08:19:14.701119 | orchestrator | ok: [testbed-manager] 2025-02-19 08:19:14.889921 | orchestrator | 2025-02-19 08:19:14.890122 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-02-19 08:19:14.890164 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:19:18.047635 | orchestrator | 2025-02-19 08:19:18.047810 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-02-19 08:19:18.047832 | orchestrator | 2025-02-19 08:19:18.047848 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:19:18.047879 | orchestrator | ok: [testbed-manager] 2025-02-19 08:19:18.278075 | orchestrator | 2025-02-19 08:19:18.278204 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-02-19 08:19:18.278244 | orchestrator | 2025-02-19 08:19:18.377072 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-02-19 08:19:18.377173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-02-19 08:19:19.508304 | orchestrator | 2025-02-19 08:19:19.508511 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-02-19 08:19:19.508550 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-02-19 08:19:21.417552 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-02-19 08:19:21.417677 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-02-19 08:19:21.417763 | orchestrator | 2025-02-19 08:19:21.417782 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-02-19 08:19:21.417816 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-02-19 08:19:22.125079 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-02-19 08:19:22.125191 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-02-19 08:19:22.125209 | orchestrator | 2025-02-19 08:19:22.125226 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-02-19 08:19:22.125257 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:19:22.817120 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:22.817246 | orchestrator | 2025-02-19 08:19:22.817267 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-02-19 08:19:22.817303 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:19:22.909606 | 
orchestrator | changed: [testbed-manager] 2025-02-19 08:19:22.909803 | orchestrator | 2025-02-19 08:19:22.909835 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-02-19 08:19:22.909877 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:19:23.291202 | orchestrator | 2025-02-19 08:19:23.291305 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-02-19 08:19:23.291336 | orchestrator | ok: [testbed-manager] 2025-02-19 08:19:23.392648 | orchestrator | 2025-02-19 08:19:23.392826 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-02-19 08:19:23.392865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-02-19 08:19:24.504256 | orchestrator | 2025-02-19 08:19:24.505145 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-02-19 08:19:24.505211 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:25.382106 | orchestrator | 2025-02-19 08:19:25.382202 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-02-19 08:19:25.382226 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:28.433366 | orchestrator | 2025-02-19 08:19:28.433474 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-02-19 08:19:28.433499 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:28.734132 | orchestrator | 2025-02-19 08:19:28.734254 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-02-19 08:19:28.734292 | orchestrator | 2025-02-19 08:19:28.855403 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-02-19 08:19:28.855544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-02-19 08:19:31.952841 | orchestrator | 2025-02-19 08:19:31.952978 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-02-19 08:19:31.953017 | orchestrator | ok: [testbed-manager] 2025-02-19 08:19:32.124410 | orchestrator | 2025-02-19 08:19:32.124505 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-02-19 08:19:32.124530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-02-19 08:19:33.282642 | orchestrator | 2025-02-19 08:19:33.282825 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-02-19 08:19:33.282865 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-02-19 08:19:33.404403 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-02-19 08:19:33.404516 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-02-19 08:19:33.404534 | orchestrator | 2025-02-19 08:19:33.404549 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-02-19 08:19:33.404582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-02-19 08:19:34.043238 | orchestrator | 2025-02-19 
08:19:34.043331 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-02-19 08:19:34.043355 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-02-19 08:19:34.749869 | orchestrator | 2025-02-19 08:19:34.749965 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-02-19 08:19:34.749992 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:19:35.166817 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:35.166957 | orchestrator | 2025-02-19 08:19:35.166984 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-02-19 08:19:35.167026 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:35.542659 | orchestrator | 2025-02-19 08:19:35.542816 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-02-19 08:19:35.542842 | orchestrator | ok: [testbed-manager] 2025-02-19 08:19:35.605419 | orchestrator | 2025-02-19 08:19:35.605524 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-02-19 08:19:35.605550 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:19:36.257376 | orchestrator | 2025-02-19 08:19:36.257455 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-02-19 08:19:36.257487 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:36.370254 | orchestrator | 2025-02-19 08:19:36.370358 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-02-19 08:19:36.370388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-02-19 08:19:37.159482 | orchestrator | 2025-02-19 08:19:37.159610 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-02-19 08:19:37.159661 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-02-19 08:19:37.905851 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-02-19 08:19:37.905935 | orchestrator | 2025-02-19 08:19:37.905944 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-02-19 08:19:37.905960 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-02-19 08:19:38.624910 | orchestrator | 2025-02-19 08:19:38.625057 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-02-19 08:19:38.625098 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:38.700625 | orchestrator | 2025-02-19 08:19:38.700742 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-02-19 08:19:38.700761 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:19:39.350335 | orchestrator | 2025-02-19 08:19:39.350442 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-02-19 08:19:39.350468 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:41.297383 | orchestrator | 2025-02-19 08:19:41.297490 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-02-19 08:19:41.297513 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:19:47.516295 | orchestrator | changed: [testbed-manager] => 
(item=None) 2025-02-19 08:19:47.516397 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:19:47.516409 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:47.516418 | orchestrator | 2025-02-19 08:19:47.516427 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-02-19 08:19:47.516448 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-02-19 08:19:48.201909 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-02-19 08:19:48.202009 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-02-19 08:19:48.202076 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-02-19 08:19:48.202088 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-02-19 08:19:48.202100 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-02-19 08:19:48.202110 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-02-19 08:19:48.202120 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-02-19 08:19:48.202130 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-02-19 08:19:48.202141 | orchestrator | changed: [testbed-manager] => (item=users) 2025-02-19 08:19:48.202151 | orchestrator | 2025-02-19 08:19:48.202161 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-02-19 08:19:48.202187 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-02-19 08:19:48.382105 | orchestrator | 2025-02-19 08:19:48.382224 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-02-19 08:19:48.382262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-02-19 08:19:49.127979 | orchestrator | 2025-02-19 08:19:49.128087 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-02-19 08:19:49.128115 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:49.793364 | orchestrator | 2025-02-19 08:19:49.793476 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-02-19 08:19:49.793508 | orchestrator | ok: [testbed-manager] 2025-02-19 08:19:50.582324 | orchestrator | 2025-02-19 08:19:50.582454 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-02-19 08:19:50.582507 | orchestrator | changed: [testbed-manager] 2025-02-19 08:19:52.906891 | orchestrator | 2025-02-19 08:19:52.907008 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-02-19 08:19:52.907038 | orchestrator | ok: [testbed-manager] 2025-02-19 08:19:53.881315 | orchestrator | 2025-02-19 08:19:53.881471 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-02-19 08:19:53.881525 | orchestrator | ok: [testbed-manager] 2025-02-19 08:20:16.159265 | orchestrator | 2025-02-19 08:20:16.160128 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-02-19 08:20:16.160187 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 
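The netbox role above copies a systemd unit file, retires the old docker-compose@netbox unit, and then the "Manage netbox service" task (retried once before reporting ok just below) starts the new unit, which wraps the docker compose project under /opt/netbox. A quick manual check of the result could look like the following; the unit name netbox is an assumption based on the task names, while the other two commands also appear verbatim later in this log:

    systemctl is-active netbox          # assumed unit name
    docker compose --project-directory /opt/netbox ps
    docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1
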
2025-02-19 08:20:16.244930 | orchestrator | ok: [testbed-manager] 2025-02-19 08:20:16.245078 | orchestrator | 2025-02-19 08:20:16.245108 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-02-19 08:20:16.245290 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:20:16.316791 | orchestrator | 2025-02-19 08:20:16.316913 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-02-19 08:20:16.316933 | orchestrator | 2025-02-19 08:20:16.316949 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-02-19 08:20:16.316981 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:20:16.406159 | orchestrator | 2025-02-19 08:20:16.406273 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-02-19 08:20:16.406308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-02-19 08:20:17.341931 | orchestrator | 2025-02-19 08:20:17.342111 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-02-19 08:20:17.342152 | orchestrator | ok: [testbed-manager] 2025-02-19 08:20:17.441856 | orchestrator | 2025-02-19 08:20:17.441972 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-02-19 08:20:17.442009 | orchestrator | ok: [testbed-manager] 2025-02-19 08:20:17.495730 | orchestrator | 2025-02-19 08:20:17.495816 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-02-19 08:20:17.495847 | orchestrator | ok: [testbed-manager] => { 2025-02-19 08:20:18.253895 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-02-19 08:20:18.254087 | orchestrator | } 2025-02-19 08:20:18.254114 | orchestrator | 2025-02-19 08:20:18.254131 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-02-19 08:20:18.254164 | orchestrator | ok: [testbed-manager] 2025-02-19 08:20:19.313408 | orchestrator | 2025-02-19 08:20:19.313531 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-02-19 08:20:19.313569 | orchestrator | ok: [testbed-manager] 2025-02-19 08:20:19.411579 | orchestrator | 2025-02-19 08:20:19.411725 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-02-19 08:20:19.411761 | orchestrator | ok: [testbed-manager] 2025-02-19 08:20:19.482957 | orchestrator | 2025-02-19 08:20:19.483081 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-02-19 08:20:19.483117 | orchestrator | ok: [testbed-manager] => { 2025-02-19 08:20:19.558377 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-02-19 08:20:19.558485 | orchestrator | } 2025-02-19 08:20:19.558507 | orchestrator | 2025-02-19 08:20:19.558526 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-02-19 08:20:19.558563 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:20:19.627834 | orchestrator | 2025-02-19 08:20:19.627933 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-02-19 08:20:19.627960 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:20:19.701801 | 
orchestrator | 2025-02-19 08:20:19.701902 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-02-19 08:20:19.701935 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:20:19.778311 | orchestrator | 2025-02-19 08:20:19.778403 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-02-19 08:20:19.778434 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:20:19.847410 | orchestrator | 2025-02-19 08:20:19.847518 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-02-19 08:20:19.847550 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:20:19.968844 | orchestrator | 2025-02-19 08:20:19.968954 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-02-19 08:20:19.968989 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:20:21.372002 | orchestrator | 2025-02-19 08:20:21.372142 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-02-19 08:20:21.372191 | orchestrator | changed: [testbed-manager] 2025-02-19 08:20:21.488571 | orchestrator | 2025-02-19 08:20:21.488809 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-02-19 08:20:21.488851 | orchestrator | ok: [testbed-manager] 2025-02-19 08:21:21.561563 | orchestrator | 2025-02-19 08:21:21.561778 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-02-19 08:21:21.561823 | orchestrator | Pausing for 60 seconds 2025-02-19 08:21:21.680637 | orchestrator | changed: [testbed-manager] 2025-02-19 08:21:21.680831 | orchestrator | 2025-02-19 08:21:21.680851 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-02-19 08:21:21.680881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-02-19 08:24:53.269091 | orchestrator | 2025-02-19 08:24:53.269234 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-02-19 08:24:53.269276 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-02-19 08:24:55.713947 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-02-19 08:24:55.714069 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-02-19 08:24:55.714081 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-02-19 08:24:55.714088 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-02-19 08:24:55.714095 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-02-19 08:24:55.714101 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-02-19 08:24:55.714107 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 
2025-02-19 08:24:55.714113 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-02-19 08:24:55.714119 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-02-19 08:24:55.714125 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-02-19 08:24:55.714131 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-02-19 08:24:55.714136 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-02-19 08:24:55.714142 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-02-19 08:24:55.714148 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-02-19 08:24:55.714154 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-02-19 08:24:55.714160 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-02-19 08:24:55.714166 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-02-19 08:24:55.714171 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-02-19 08:24:55.714177 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-02-19 08:24:55.714183 | orchestrator | changed: [testbed-manager] 2025-02-19 08:24:55.714190 | orchestrator | 2025-02-19 08:24:55.714197 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-02-19 08:24:55.714225 | orchestrator | 2025-02-19 08:24:55.714232 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:24:55.714248 | orchestrator | ok: [testbed-manager] 2025-02-19 08:24:55.846992 | orchestrator | 2025-02-19 08:24:55.847138 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-02-19 08:24:55.847177 | orchestrator | 2025-02-19 08:24:55.917432 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-02-19 08:24:55.917572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-02-19 08:24:57.745617 | orchestrator | 2025-02-19 08:24:57.745806 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-02-19 08:24:57.745842 | orchestrator | ok: [testbed-manager] 2025-02-19 08:24:57.803626 | orchestrator | 2025-02-19 08:24:57.803757 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-02-19 08:24:57.803789 | orchestrator | ok: [testbed-manager] 2025-02-19 08:24:57.925102 | orchestrator | 2025-02-19 08:24:57.925222 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-02-19 08:24:57.925260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml 
for testbed-manager 2025-02-19 08:25:00.845310 | orchestrator | 2025-02-19 08:25:00.845445 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-02-19 08:25:00.845485 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-02-19 08:25:01.532147 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-02-19 08:25:01.532231 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-02-19 08:25:01.532240 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-02-19 08:25:01.532246 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-02-19 08:25:01.532253 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-02-19 08:25:01.532259 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-02-19 08:25:01.532279 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-02-19 08:25:01.532285 | orchestrator | 2025-02-19 08:25:01.532292 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-02-19 08:25:01.532309 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:01.622317 | orchestrator | 2025-02-19 08:25:01.622454 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-02-19 08:25:01.622498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-02-19 08:25:02.929506 | orchestrator | 2025-02-19 08:25:02.929612 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-02-19 08:25:02.929680 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-02-19 08:25:03.594220 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-02-19 08:25:03.594350 | orchestrator | 2025-02-19 08:25:03.594371 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-02-19 08:25:03.594406 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:03.664213 | orchestrator | 2025-02-19 08:25:03.664355 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-02-19 08:25:03.664397 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:25:03.733458 | orchestrator | 2025-02-19 08:25:03.733566 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-02-19 08:25:03.733599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-02-19 08:25:05.132156 | orchestrator | 2025-02-19 08:25:05.132286 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-02-19 08:25:05.132324 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:25:05.798514 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:25:05.798693 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:05.798718 | orchestrator | 2025-02-19 08:25:05.798734 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-02-19 08:25:05.798795 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:05.905321 | orchestrator | 2025-02-19 08:25:05.906291 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 
2025-02-19 08:25:05.906360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-02-19 08:25:06.516405 | orchestrator | 2025-02-19 08:25:06.516533 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-02-19 08:25:06.516571 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:25:07.178620 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:07.178824 | orchestrator | 2025-02-19 08:25:07.178862 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-02-19 08:25:07.178909 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:07.294262 | orchestrator | 2025-02-19 08:25:07.294381 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-02-19 08:25:07.294419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-02-19 08:25:07.926825 | orchestrator | 2025-02-19 08:25:07.926925 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-02-19 08:25:07.926954 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:08.359894 | orchestrator | 2025-02-19 08:25:08.360049 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-02-19 08:25:08.360091 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:09.578888 | orchestrator | 2025-02-19 08:25:09.578983 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-02-19 08:25:09.579008 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-02-19 08:25:10.235312 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-02-19 08:25:10.235460 | orchestrator | 2025-02-19 08:25:10.235492 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-02-19 08:25:10.235538 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:10.588080 | orchestrator | 2025-02-19 08:25:10.588186 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-02-19 08:25:10.588210 | orchestrator | ok: [testbed-manager] 2025-02-19 08:25:10.684722 | orchestrator | 2025-02-19 08:25:10.684800 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-02-19 08:25:10.684820 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:25:11.302831 | orchestrator | 2025-02-19 08:25:11.302964 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-02-19 08:25:11.303000 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:11.377772 | orchestrator | 2025-02-19 08:25:11.377882 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-02-19 08:25:11.377927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-02-19 08:25:11.440988 | orchestrator | 2025-02-19 08:25:11.441106 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-02-19 08:25:11.441138 | orchestrator | ok: [testbed-manager] 2025-02-19 08:25:13.529176 | orchestrator | 2025-02-19 08:25:13.529304 | orchestrator | TASK 
[osism.services.manager : Copy wrapper scripts] *************************** 2025-02-19 08:25:13.529344 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-02-19 08:25:14.251907 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-02-19 08:25:14.252032 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-02-19 08:25:14.252052 | orchestrator | 2025-02-19 08:25:14.252067 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-02-19 08:25:14.252099 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:14.332662 | orchestrator | 2025-02-19 08:25:14.332778 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-02-19 08:25:14.332813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-02-19 08:25:14.390904 | orchestrator | 2025-02-19 08:25:14.391050 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-02-19 08:25:14.391145 | orchestrator | ok: [testbed-manager] 2025-02-19 08:25:15.130411 | orchestrator | 2025-02-19 08:25:15.130535 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-02-19 08:25:15.130585 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-02-19 08:25:15.214593 | orchestrator | 2025-02-19 08:25:15.214750 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-02-19 08:25:15.214789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-02-19 08:25:15.983382 | orchestrator | 2025-02-19 08:25:15.983541 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-02-19 08:25:15.983584 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:16.638353 | orchestrator | 2025-02-19 08:25:16.638489 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-02-19 08:25:16.638526 | orchestrator | ok: [testbed-manager] 2025-02-19 08:25:16.701487 | orchestrator | 2025-02-19 08:25:16.701594 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-02-19 08:25:16.701676 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:25:16.764457 | orchestrator | 2025-02-19 08:25:16.764573 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-02-19 08:25:16.764610 | orchestrator | ok: [testbed-manager] 2025-02-19 08:25:17.658989 | orchestrator | 2025-02-19 08:25:17.659106 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-02-19 08:25:17.659139 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:40.527155 | orchestrator | 2025-02-19 08:25:40.527289 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-02-19 08:25:40.527327 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:41.213772 | orchestrator | 2025-02-19 08:25:41.214807 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-02-19 08:25:41.214870 | orchestrator | ok: [testbed-manager] 2025-02-19 08:25:44.783386 | orchestrator | 2025-02-19 08:25:44.783531 | orchestrator 
| TASK [osism.services.manager : Manage manager service] ************************* 2025-02-19 08:25:44.783583 | orchestrator | changed: [testbed-manager] 2025-02-19 08:25:44.845281 | orchestrator | 2025-02-19 08:25:44.845401 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-02-19 08:25:44.845437 | orchestrator | ok: [testbed-manager] 2025-02-19 08:25:44.919053 | orchestrator | 2025-02-19 08:25:44.919167 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-02-19 08:25:44.919186 | orchestrator | 2025-02-19 08:25:44.919202 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-02-19 08:25:44.919233 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:26:44.977096 | orchestrator | 2025-02-19 08:26:44.978251 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-02-19 08:26:44.978339 | orchestrator | Pausing for 60 seconds 2025-02-19 08:26:50.177353 | orchestrator | changed: [testbed-manager] 2025-02-19 08:26:50.177458 | orchestrator | 2025-02-19 08:26:50.177473 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-02-19 08:26:50.177495 | orchestrator | changed: [testbed-manager] 2025-02-19 08:27:32.290129 | orchestrator | 2025-02-19 08:27:32.290303 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-02-19 08:27:32.290347 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-02-19 08:27:38.379322 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-02-19 08:27:38.380215 | orchestrator | changed: [testbed-manager] 2025-02-19 08:27:38.380266 | orchestrator | 2025-02-19 08:27:38.380274 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-02-19 08:27:38.380292 | orchestrator | changed: [testbed-manager] 2025-02-19 08:27:38.463196 | orchestrator | 2025-02-19 08:27:38.463306 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-02-19 08:27:38.463339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-02-19 08:27:38.535505 | orchestrator | 2025-02-19 08:27:38.535662 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-02-19 08:27:38.535681 | orchestrator | 2025-02-19 08:27:38.535696 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-02-19 08:27:38.535726 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:27:38.680951 | orchestrator | 2025-02-19 08:27:38.681069 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:27:38.681101 | orchestrator | testbed-manager : ok=103 changed=54 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-02-19 08:27:38.681124 | orchestrator | 2025-02-19 08:27:38.681171 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-19 08:27:38.687119 | orchestrator | + deactivate 2025-02-19 08:27:38.687198 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-02-19 08:27:38.687233 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-19 08:27:38.687263 | orchestrator | + export PATH 2025-02-19 08:27:38.687291 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-02-19 08:27:38.687315 | orchestrator | + '[' -n '' ']' 2025-02-19 08:27:38.687330 | orchestrator | + hash -r 2025-02-19 08:27:38.687343 | orchestrator | + '[' -n '' ']' 2025-02-19 08:27:38.687357 | orchestrator | + unset VIRTUAL_ENV 2025-02-19 08:27:38.687371 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-02-19 08:27:38.687385 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-02-19 08:27:38.687399 | orchestrator | + unset -f deactivate 2025-02-19 08:27:38.687413 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-02-19 08:27:38.687440 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-19 08:27:38.688123 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-02-19 08:27:38.688166 | orchestrator | + local max_attempts=60 2025-02-19 08:27:38.688189 | orchestrator | + local name=ceph-ansible 2025-02-19 08:27:38.688212 | orchestrator | + local attempt_num=1 2025-02-19 08:27:38.688244 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-02-19 08:27:38.724675 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-19 08:27:38.725013 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-02-19 08:27:38.725052 | orchestrator | + local max_attempts=60 2025-02-19 08:27:38.725067 | orchestrator | + local name=kolla-ansible 2025-02-19 08:27:38.725082 | orchestrator | + local attempt_num=1 2025-02-19 08:27:38.725102 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-02-19 08:27:38.762562 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-19 08:27:38.763700 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-02-19 08:27:38.763779 | orchestrator | + local max_attempts=60 2025-02-19 08:27:38.763798 | orchestrator | + local name=osism-ansible 2025-02-19 08:27:38.763813 | orchestrator | + local attempt_num=1 2025-02-19 08:27:38.763837 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-02-19 08:27:38.798906 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-19 08:27:40.140842 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-19 08:27:40.140929 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-02-19 08:27:40.140951 | orchestrator | ++ semver latest 8.0.0 2025-02-19 08:27:40.193408 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-19 08:27:40.194458 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-19 08:27:40.194503 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-02-19 08:27:40.194515 | orchestrator | + local max_attempts=60 2025-02-19 08:27:40.194524 | orchestrator | + local name=netbox-netbox-1 2025-02-19 08:27:40.194534 | orchestrator | + local attempt_num=1 2025-02-19 08:27:40.194552 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-02-19 08:27:40.236201 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-19 08:27:40.245288 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-02-19 08:27:40.245388 | orchestrator | + set -e 2025-02-19 08:27:41.846395 | orchestrator | + osism netbox import 2025-02-19 08:27:41.846528 | orchestrator | 2025-02-19 08:27:41 | INFO  | Task c0fed07f-f242-4c99-bf82-78eb807afa17 is running. Wait. No more output. 2025-02-19 08:27:46.032111 | orchestrator | + osism netbox init 2025-02-19 08:27:47.471816 | orchestrator | 2025-02-19 08:27:47 | INFO  | Task 78a78589-beea-4249-8551-33ed37f304d1 was prepared for execution. 2025-02-19 08:27:47.472089 | orchestrator | 2025-02-19 08:27:47 | INFO  | It takes a moment until task 78a78589-beea-4249-8551-33ed37f304d1 has been started and output is visible here. 
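The deploy script's wait_for_container_healthy helper is traced a few entries above: it takes a retry budget and a container name and probes docker inspect for the health status. A reconstruction consistent with the traced variable names follows; the sleep interval and the failure path are assumptions, since the trace only shows containers that were already healthy on the first probe:

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1

        # Probe the container until Docker reports its health status as "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container $name did not become healthy" >&2
                return 1
            fi
            attempt_num=$(( attempt_num + 1 ))
            sleep 5   # assumed interval; not visible in the trace
        done
    }
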
2025-02-19 08:27:49.282339 | orchestrator | 2025-02-19 08:27:49.285137 | orchestrator | PLAY [Wait for netbox service] ************************************************* 2025-02-19 08:27:49.285160 | orchestrator | 2025-02-19 08:27:49.285498 | orchestrator | TASK [Wait for netbox service] ************************************************* 2025-02-19 08:27:55.788853 | orchestrator | [WARNING]: Platform linux on host localhost is using the discovered Python 2025-02-19 08:27:55.789056 | orchestrator | interpreter at /usr/local/bin/python3.13, but future installation of another 2025-02-19 08:27:55.789078 | orchestrator | Python interpreter could change the meaning of that path. See 2025-02-19 08:27:55.789096 | orchestrator | https://docs.ansible.com/ansible- 2025-02-19 08:27:55.789748 | orchestrator | core/2.18/reference_appendices/interpreter_discovery.html for more information. 2025-02-19 08:27:55.800182 | orchestrator | ok: [localhost] 2025-02-19 08:27:55.800727 | orchestrator | 2025-02-19 08:27:55.801229 | orchestrator | PLAY [Manage sites and locations] ********************************************** 2025-02-19 08:27:55.801718 | orchestrator | 2025-02-19 08:27:55.801827 | orchestrator | TASK [Manage Discworld site] *************************************************** 2025-02-19 08:27:57.263633 | orchestrator | changed: [localhost] 2025-02-19 08:27:57.263754 | orchestrator | 2025-02-19 08:27:57.263780 | orchestrator | TASK [Manage Ankh-Morpork location] ******************************************** 2025-02-19 08:27:58.816104 | orchestrator | changed: [localhost] 2025-02-19 08:27:58.816370 | orchestrator | 2025-02-19 08:27:58.816411 | orchestrator | PLAY [Manage IP prefixes] ****************************************************** 2025-02-19 08:27:58.817052 | orchestrator | 2025-02-19 08:27:58.817796 | orchestrator | TASK [Manage 192.168.16.0/20] ************************************************** 2025-02-19 08:28:00.335128 | orchestrator | changed: [localhost] 2025-02-19 08:28:00.335653 | orchestrator | 2025-02-19 08:28:00.335687 | orchestrator | TASK [Manage 192.168.112.0/20] ************************************************* 2025-02-19 08:28:01.702507 | orchestrator | changed: [localhost] 2025-02-19 08:28:01.703213 | orchestrator | 2025-02-19 08:28:01.703285 | orchestrator | PLAY [Manage IP addresses] ***************************************************** 2025-02-19 08:28:01.703941 | orchestrator | 2025-02-19 08:28:01.704288 | orchestrator | TASK [Manage api.testbed.osism.xyz IP address] ********************************* 2025-02-19 08:28:03.125908 | orchestrator | changed: [localhost] 2025-02-19 08:28:04.351038 | orchestrator | 2025-02-19 08:28:04.351135 | orchestrator | TASK [Manage api-int.testbed.osism.xyz IP address] ***************************** 2025-02-19 08:28:04.351157 | orchestrator | changed: [localhost] 2025-02-19 08:28:04.351734 | orchestrator | 2025-02-19 08:28:04.352477 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:28:04.352870 | orchestrator | 2025-02-19 08:28:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:28:04.353050 | orchestrator | 2025-02-19 08:28:04 | INFO  | Please wait and do not abort execution. 
2025-02-19 08:28:04.354391 | orchestrator | localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:28:04.355162 | orchestrator | 2025-02-19 08:28:04.702700 | orchestrator | + osism netbox manage 1000 2025-02-19 08:28:06.131730 | orchestrator | 2025-02-19 08:28:06 | INFO  | Task 017667f7-a7b7-4633-a163-1efb7b22bb0c was prepared for execution. 2025-02-19 08:28:07.914481 | orchestrator | 2025-02-19 08:28:06 | INFO  | It takes a moment until task 017667f7-a7b7-4633-a163-1efb7b22bb0c has been started and output is visible here. 2025-02-19 08:28:07.914703 | orchestrator | 2025-02-19 08:28:07.915743 | orchestrator | PLAY [Manage rack 1000] ******************************************************** 2025-02-19 08:28:07.915789 | orchestrator | 2025-02-19 08:28:07.916496 | orchestrator | TASK [Manage rack 1000] ******************************************************** 2025-02-19 08:28:09.982305 | orchestrator | changed: [localhost] 2025-02-19 08:28:09.983037 | orchestrator | 2025-02-19 08:28:09.983306 | orchestrator | TASK [Manage testbed-switch-0] ************************************************* 2025-02-19 08:28:16.645647 | orchestrator | changed: [localhost] 2025-02-19 08:28:23.648230 | orchestrator | 2025-02-19 08:28:23.648364 | orchestrator | TASK [Manage testbed-switch-1] ************************************************* 2025-02-19 08:28:23.648401 | orchestrator | changed: [localhost] 2025-02-19 08:28:30.425047 | orchestrator | 2025-02-19 08:28:30.426087 | orchestrator | TASK [Manage testbed-switch-2] ************************************************* 2025-02-19 08:28:30.426145 | orchestrator | changed: [localhost] 2025-02-19 08:28:32.986435 | orchestrator | 2025-02-19 08:28:32.986636 | orchestrator | TASK [Manage testbed-manager] ************************************************** 2025-02-19 08:28:32.986697 | orchestrator | changed: [localhost] 2025-02-19 08:28:32.987368 | orchestrator | 2025-02-19 08:28:32.987404 | orchestrator | TASK [Manage testbed-node-0] *************************************************** 2025-02-19 08:28:35.343757 | orchestrator | changed: [localhost] 2025-02-19 08:28:35.343979 | orchestrator | 2025-02-19 08:28:35.344719 | orchestrator | TASK [Manage testbed-node-1] *************************************************** 2025-02-19 08:28:37.784996 | orchestrator | changed: [localhost] 2025-02-19 08:28:40.145309 | orchestrator | 2025-02-19 08:28:40.145501 | orchestrator | TASK [Manage testbed-node-2] *************************************************** 2025-02-19 08:28:40.145547 | orchestrator | changed: [localhost] 2025-02-19 08:28:40.146844 | orchestrator | 2025-02-19 08:28:40.146885 | orchestrator | TASK [Manage testbed-node-3] *************************************************** 2025-02-19 08:28:42.515053 | orchestrator | changed: [localhost] 2025-02-19 08:28:42.515308 | orchestrator | 2025-02-19 08:28:42.515764 | orchestrator | TASK [Manage testbed-node-4] *************************************************** 2025-02-19 08:28:45.195420 | orchestrator | changed: [localhost] 2025-02-19 08:28:45.195763 | orchestrator | 2025-02-19 08:28:45.196485 | orchestrator | TASK [Manage testbed-node-5] *************************************************** 2025-02-19 08:28:47.428391 | orchestrator | changed: [localhost] 2025-02-19 08:28:49.806319 | orchestrator | 2025-02-19 08:28:49.806457 | orchestrator | TASK [Manage testbed-node-6] *************************************************** 2025-02-19 08:28:49.806495 | orchestrator | changed: [localhost] 
2025-02-19 08:28:49.807012 | orchestrator | 2025-02-19 08:28:49.807049 | orchestrator | TASK [Manage testbed-node-7] *************************************************** 2025-02-19 08:28:52.058931 | orchestrator | changed: [localhost] 2025-02-19 08:28:54.732384 | orchestrator | 2025-02-19 08:28:54.732504 | orchestrator | TASK [Manage testbed-node-8] *************************************************** 2025-02-19 08:28:54.732536 | orchestrator | changed: [localhost] 2025-02-19 08:28:54.733748 | orchestrator | 2025-02-19 08:28:54.734424 | orchestrator | TASK [Manage testbed-node-9] *************************************************** 2025-02-19 08:28:57.108853 | orchestrator | changed: [localhost] 2025-02-19 08:28:57.109156 | orchestrator | 2025-02-19 08:28:57.109989 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:28:57.110511 | orchestrator | 2025-02-19 08:28:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:28:57.110708 | orchestrator | 2025-02-19 08:28:57 | INFO  | Please wait and do not abort execution. 2025-02-19 08:28:57.113016 | orchestrator | localhost : ok=15 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:28:57.113184 | orchestrator | 2025-02-19 08:28:57.452420 | orchestrator | + osism netbox connect 1000 --state a 2025-02-19 08:28:58.978539 | orchestrator | 2025-02-19 08:28:58 | INFO  | Task 43b50eb3-5e10-4fbe-9230-a50f5964ee1e for device testbed-node-7 is running in background 2025-02-19 08:28:58.981691 | orchestrator | 2025-02-19 08:28:58 | INFO  | Task d8387019-a391-4877-ac20-3cf2032bf97e for device testbed-node-8 is running in background 2025-02-19 08:28:58.985742 | orchestrator | 2025-02-19 08:28:58 | INFO  | Task 07e6b54d-2d70-4e7a-b9b9-da9aedd803d0 for device testbed-switch-1 is running in background 2025-02-19 08:28:58.989420 | orchestrator | 2025-02-19 08:28:58 | INFO  | Task 23949723-f726-4ab9-93b2-81808306e604 for device testbed-node-9 is running in background 2025-02-19 08:28:58.994994 | orchestrator | 2025-02-19 08:28:58 | INFO  | Task 95bcfa80-650a-4ea5-9570-dd8cce82dee2 for device testbed-node-3 is running in background 2025-02-19 08:28:58.998444 | orchestrator | 2025-02-19 08:28:58 | INFO  | Task b1cb44f0-03c0-45c5-a1de-a7ac4ba7629a for device testbed-node-2 is running in background 2025-02-19 08:28:59.006565 | orchestrator | 2025-02-19 08:28:59 | INFO  | Task bec576e8-d15a-428d-a47d-4ac7d56e40b5 for device testbed-node-5 is running in background 2025-02-19 08:28:59.018125 | orchestrator | 2025-02-19 08:28:59 | INFO  | Task c1eaa9b8-02d3-476d-8521-5b9eadd6be66 for device testbed-node-4 is running in background 2025-02-19 08:28:59.027310 | orchestrator | 2025-02-19 08:28:59 | INFO  | Task 0ed40d43-4dfb-4420-993a-643485238efc for device testbed-manager is running in background 2025-02-19 08:28:59.030994 | orchestrator | 2025-02-19 08:28:59 | INFO  | Task 320741f5-4c79-4328-8f23-143a09e4d3dd for device testbed-switch-0 is running in background 2025-02-19 08:28:59.035992 | orchestrator | 2025-02-19 08:28:59 | INFO  | Task 54ca0930-7217-4941-8894-b7df0a85a7c3 for device testbed-switch-2 is running in background 2025-02-19 08:28:59.038503 | orchestrator | 2025-02-19 08:28:59 | INFO  | Task bce7c7b9-3ea7-45d3-b882-83797250d413 for device testbed-node-6 is running in background 2025-02-19 08:28:59.041623 | orchestrator | 2025-02-19 08:28:59 | INFO  | Task f812c56a-9dc5-412c-bdfc-85e605ef338b for device testbed-node-0 is 
running in background
2025-02-19 08:28:59.048207 | orchestrator | 2025-02-19 08:28:59 | INFO  | Task 24c32be0-492a-49d3-914a-5ca205b6a439 for device testbed-node-1 is running in background
2025-02-19 08:28:59.289961 | orchestrator | 2025-02-19 08:28:59 | INFO  | Tasks are running in background. No more output. Check Flower for logs.
2025-02-19 08:28:59.290155 | orchestrator | + osism netbox disable --no-wait testbed-switch-0
2025-02-19 08:29:01.010953 | orchestrator | + osism netbox disable --no-wait testbed-switch-1
2025-02-19 08:29:02.739717 | orchestrator | + osism netbox disable --no-wait testbed-switch-2
2025-02-19 08:29:04.469622 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-02-19 08:29:04.792808 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-02-19 08:29:04.799698 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2025-02-19 08:29:04.799773 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2025-02-19 08:29:04.799790 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-02-19 08:29:04.799805 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 3 minutes ago Up 3 minutes (healthy) 8000/tcp
2025-02-19 08:29:04.799833 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" beat 3 minutes ago Up 3 minutes (healthy)
2025-02-19 08:29:04.799849 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" conductor 3 minutes ago Up 3 minutes (healthy)
2025-02-19 08:29:04.799864 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" flower 3 minutes ago Up 3 minutes (healthy)
2025-02-19 08:29:04.799879 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy)
2025-02-19 08:29:04.799893 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" listener 3 minutes ago Up 3 minutes (healthy)
2025-02-19 08:29:04.799937 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb 3 minutes ago Up 3 minutes (healthy) 3306/tcp
2025-02-19 08:29:04.799953 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism-netbox:latest "/usr/bin/tini -- os…" netbox 3 minutes ago Up 3 minutes (healthy)
2025-02-19 08:29:04.799967 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" openstack 3 minutes ago Up 3 minutes (healthy)
2025-02-19 08:29:04.799981 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 3 minutes ago Up 3 minutes (healthy) 6379/tcp
2025-02-19 08:29:04.799999 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" watchdog 3 minutes ago Up 3 minutes (healthy)
2025-02-19 08:29:04.800014 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2025-02-19 08:29:04.800028 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2025-02-19 08:29:04.800043 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/usr/bin/tini -- sl…" osismclient 3 minutes ago Up 3 minutes (healthy)
2025-02-19 08:29:04.800071 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-02-19 08:29:05.043423 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-02-19 08:29:05.050469 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy)
2025-02-19 08:29:05.050552 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 4 minutes (healthy)
2025-02-19 08:29:05.050617 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp
2025-02-19 08:29:05.050645 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp
2025-02-19 08:29:05.050676 | orchestrator | ++ semver latest 7.0.0
2025-02-19 08:29:05.101794 | orchestrator | + [[ -1 -ge 0 ]]
2025-02-19 08:29:05.106937 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-02-19 08:29:05.107004 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-02-19 08:29:05.107031 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-02-19 08:29:06.638626 | orchestrator | 2025-02-19 08:29:06 | INFO  | Task 073b3779-e8eb-4a8a-a843-509341323181 (resolvconf) was prepared for execution.
2025-02-19 08:29:09.974963 | orchestrator | 2025-02-19 08:29:06 | INFO  | It takes a moment until task 073b3779-e8eb-4a8a-a843-509341323181 (resolvconf) has been started and output is visible here.
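The remaining steps of this job all follow the same pattern: an osism apply <role> call on the manager hands the play to the worker containers listed above, and the play output is streamed back into this console (the "Task ... was prepared for execution" lines). A minimal sketch of the invocations that appear later in this log; the command forms are taken verbatim from the log, the comments are interpretation and not job output:

    # bootstrap roles applied in the remainder of this section
    osism apply resolvconf -l testbed-manager         # -l limits the play to a single host
    osism apply sshconfig
    osism apply known-hosts
    osism apply squid
    osism apply operator -u ubuntu -l testbed-nodes   # see the note before that play below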
2025-02-19 08:29:09.975089 | orchestrator | 2025-02-19 08:29:09.976702 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-02-19 08:29:09.976734 | orchestrator | 2025-02-19 08:29:14.329193 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:29:14.329324 | orchestrator | Wednesday 19 February 2025 08:29:09 +0000 (0:00:00.091) 0:00:00.091 **** 2025-02-19 08:29:14.329360 | orchestrator | ok: [testbed-manager] 2025-02-19 08:29:14.331432 | orchestrator | 2025-02-19 08:29:14.331487 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-02-19 08:29:14.374535 | orchestrator | Wednesday 19 February 2025 08:29:14 +0000 (0:00:04.355) 0:00:04.447 **** 2025-02-19 08:29:14.374743 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:29:14.377660 | orchestrator | 2025-02-19 08:29:14.378501 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-02-19 08:29:14.379714 | orchestrator | Wednesday 19 February 2025 08:29:14 +0000 (0:00:00.045) 0:00:04.492 **** 2025-02-19 08:29:14.467810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-02-19 08:29:14.468160 | orchestrator | 2025-02-19 08:29:14.468286 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-02-19 08:29:14.468322 | orchestrator | Wednesday 19 February 2025 08:29:14 +0000 (0:00:00.091) 0:00:04.584 **** 2025-02-19 08:29:14.542817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-02-19 08:29:14.543747 | orchestrator | 2025-02-19 08:29:14.545818 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-02-19 08:29:15.352776 | orchestrator | Wednesday 19 February 2025 08:29:14 +0000 (0:00:00.077) 0:00:04.662 **** 2025-02-19 08:29:15.352909 | orchestrator | ok: [testbed-manager] 2025-02-19 08:29:15.354894 | orchestrator | 2025-02-19 08:29:15.354934 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-02-19 08:29:15.398065 | orchestrator | Wednesday 19 February 2025 08:29:15 +0000 (0:00:00.808) 0:00:05.471 **** 2025-02-19 08:29:15.398180 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:29:15.399601 | orchestrator | 2025-02-19 08:29:15.399625 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-02-19 08:29:15.399642 | orchestrator | Wednesday 19 February 2025 08:29:15 +0000 (0:00:00.046) 0:00:05.517 **** 2025-02-19 08:29:15.842556 | orchestrator | ok: [testbed-manager] 2025-02-19 08:29:15.847720 | orchestrator | 2025-02-19 08:29:15.911744 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-02-19 08:29:15.911879 | orchestrator | Wednesday 19 February 2025 08:29:15 +0000 (0:00:00.443) 0:00:05.960 **** 2025-02-19 08:29:15.911924 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:29:16.397200 | orchestrator | 2025-02-19 08:29:16.397354 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-02-19 08:29:16.397388 | orchestrator | Wednesday 19 February 2025 08:29:15 +0000 (0:00:00.069) 
0:00:06.030 **** 2025-02-19 08:29:16.397433 | orchestrator | changed: [testbed-manager] 2025-02-19 08:29:17.344215 | orchestrator | 2025-02-19 08:29:17.344334 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-02-19 08:29:17.344353 | orchestrator | Wednesday 19 February 2025 08:29:16 +0000 (0:00:00.484) 0:00:06.514 **** 2025-02-19 08:29:17.344383 | orchestrator | changed: [testbed-manager] 2025-02-19 08:29:17.345780 | orchestrator | 2025-02-19 08:29:17.347975 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-02-19 08:29:17.348087 | orchestrator | Wednesday 19 February 2025 08:29:17 +0000 (0:00:00.943) 0:00:07.458 **** 2025-02-19 08:29:18.179193 | orchestrator | ok: [testbed-manager] 2025-02-19 08:29:18.329839 | orchestrator | 2025-02-19 08:29:18.329930 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-02-19 08:29:18.329969 | orchestrator | Wednesday 19 February 2025 08:29:18 +0000 (0:00:00.839) 0:00:08.297 **** 2025-02-19 08:29:19.429124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-02-19 08:29:19.429243 | orchestrator | 2025-02-19 08:29:19.429265 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-02-19 08:29:19.429282 | orchestrator | Wednesday 19 February 2025 08:29:18 +0000 (0:00:00.076) 0:00:08.373 **** 2025-02-19 08:29:19.429316 | orchestrator | changed: [testbed-manager] 2025-02-19 08:29:19.430749 | orchestrator | 2025-02-19 08:29:19.430783 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:29:19.430798 | orchestrator | 2025-02-19 08:29:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:29:19.430813 | orchestrator | 2025-02-19 08:29:19 | INFO  | Please wait and do not abort execution. 
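Judging from the task names above, the resolvconf role removes packages that would otherwise manage /etc/resolv.conf, links /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf and restarts systemd-resolved. A few hypothetical spot checks one could run on testbed-manager afterwards; they are not part of this job:

    ls -l /etc/resolv.conf                  # should now be a symlink into /run/systemd/resolve/
    resolvectl status                       # DNS servers currently used by systemd-resolved
    systemctl is-active systemd-resolved    # the service the play restarted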
2025-02-19 08:29:19.430836 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-19 08:29:19.431007 | orchestrator | 2025-02-19 08:29:19.431037 | orchestrator | 2025-02-19 08:29:19.431684 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:29:19.432004 | orchestrator | Wednesday 19 February 2025 08:29:19 +0000 (0:00:01.169) 0:00:09.543 **** 2025-02-19 08:29:19.432472 | orchestrator | =============================================================================== 2025-02-19 08:29:19.435991 | orchestrator | Gathering Facts --------------------------------------------------------- 4.36s 2025-02-19 08:29:19.437089 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2025-02-19 08:29:19.437289 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.94s 2025-02-19 08:29:19.437929 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.84s 2025-02-19 08:29:19.440796 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.81s 2025-02-19 08:29:19.441397 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s 2025-02-19 08:29:19.441765 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.44s 2025-02-19 08:29:19.442287 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-02-19 08:29:19.442816 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-02-19 08:29:19.443178 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-02-19 08:29:19.443690 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-02-19 08:29:19.445741 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-02-19 08:29:19.446124 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-02-19 08:29:19.937223 | orchestrator | + osism apply sshconfig 2025-02-19 08:29:21.471187 | orchestrator | 2025-02-19 08:29:21 | INFO  | Task 7b4c4f69-57a2-48e8-8d6f-d5c4905f4648 (sshconfig) was prepared for execution. 2025-02-19 08:29:24.831337 | orchestrator | 2025-02-19 08:29:21 | INFO  | It takes a moment until task 7b4c4f69-57a2-48e8-8d6f-d5c4905f4648 (sshconfig) has been started and output is visible here. 
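Next, the sshconfig role runs against the manager. From the tasks in the play below, it creates ~/.ssh/config.d for the operator user, writes one config fragment per testbed host and assembles them into a single SSH client config. A hypothetical way to inspect the result on the manager; the paths are inferred from the task names and are not shown in the log:

    ls ~/.ssh/config.d/                              # one fragment per host (testbed-manager, testbed-node-0, ...)
    grep -A 3 'Host testbed-node-0' ~/.ssh/config    # assembled entry for a single node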
2025-02-19 08:29:24.831526 | orchestrator | 2025-02-19 08:29:24.833367 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-02-19 08:29:24.833444 | orchestrator | 2025-02-19 08:29:25.460109 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-02-19 08:29:25.460221 | orchestrator | Wednesday 19 February 2025 08:29:24 +0000 (0:00:00.117) 0:00:00.117 **** 2025-02-19 08:29:25.460252 | orchestrator | ok: [testbed-manager] 2025-02-19 08:29:25.460459 | orchestrator | 2025-02-19 08:29:25.460482 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-02-19 08:29:25.460499 | orchestrator | Wednesday 19 February 2025 08:29:25 +0000 (0:00:00.632) 0:00:00.749 **** 2025-02-19 08:29:25.994141 | orchestrator | changed: [testbed-manager] 2025-02-19 08:29:25.994462 | orchestrator | 2025-02-19 08:29:25.996940 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-02-19 08:29:25.997031 | orchestrator | Wednesday 19 February 2025 08:29:25 +0000 (0:00:00.533) 0:00:01.283 **** 2025-02-19 08:29:31.706386 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-02-19 08:29:31.706658 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-02-19 08:29:31.706745 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-02-19 08:29:31.707204 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-02-19 08:29:31.707460 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-02-19 08:29:31.707802 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-02-19 08:29:31.708054 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-02-19 08:29:31.712542 | orchestrator | 2025-02-19 08:29:31.789299 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-02-19 08:29:31.789386 | orchestrator | Wednesday 19 February 2025 08:29:31 +0000 (0:00:05.713) 0:00:06.996 **** 2025-02-19 08:29:31.789408 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:29:32.405188 | orchestrator | 2025-02-19 08:29:32.405314 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-02-19 08:29:32.405336 | orchestrator | Wednesday 19 February 2025 08:29:31 +0000 (0:00:00.081) 0:00:07.078 **** 2025-02-19 08:29:32.405464 | orchestrator | changed: [testbed-manager] 2025-02-19 08:29:32.405488 | orchestrator | 2025-02-19 08:29:32.405508 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:29:32.405799 | orchestrator | 2025-02-19 08:29:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:29:32.406768 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:29:32.407064 | orchestrator | 2025-02-19 08:29:32 | INFO  | Please wait and do not abort execution. 
2025-02-19 08:29:32.407096 | orchestrator | 2025-02-19 08:29:32.407466 | orchestrator | 2025-02-19 08:29:32.408969 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:29:32.409267 | orchestrator | Wednesday 19 February 2025 08:29:32 +0000 (0:00:00.614) 0:00:07.693 **** 2025-02-19 08:29:32.409706 | orchestrator | =============================================================================== 2025-02-19 08:29:32.409737 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.71s 2025-02-19 08:29:32.409861 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.63s 2025-02-19 08:29:32.410106 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2025-02-19 08:29:32.410375 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2025-02-19 08:29:32.410767 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-02-19 08:29:32.893916 | orchestrator | + osism apply known-hosts 2025-02-19 08:29:34.458519 | orchestrator | 2025-02-19 08:29:34 | INFO  | Task aa34117e-af23-4235-a344-9d5b822c11cd (known-hosts) was prepared for execution. 2025-02-19 08:29:37.817786 | orchestrator | 2025-02-19 08:29:34 | INFO  | It takes a moment until task aa34117e-af23-4235-a344-9d5b822c11cd (known-hosts) has been started and output is visible here. 2025-02-19 08:29:37.817916 | orchestrator | 2025-02-19 08:29:37.821748 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-02-19 08:29:37.823102 | orchestrator | 2025-02-19 08:29:37.823371 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-02-19 08:29:37.823804 | orchestrator | Wednesday 19 February 2025 08:29:37 +0000 (0:00:00.146) 0:00:00.146 **** 2025-02-19 08:29:43.808961 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-02-19 08:29:43.809546 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-02-19 08:29:43.810756 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-02-19 08:29:43.810998 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-02-19 08:29:43.812612 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-02-19 08:29:43.814088 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-02-19 08:29:43.815131 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-02-19 08:29:43.816071 | orchestrator | 2025-02-19 08:29:43.816531 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-02-19 08:29:43.816888 | orchestrator | Wednesday 19 February 2025 08:29:43 +0000 (0:00:05.993) 0:00:06.140 **** 2025-02-19 08:29:44.003878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-02-19 08:29:44.004203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-02-19 08:29:44.005110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-4) 2025-02-19 08:29:44.006344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-02-19 08:29:44.006784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-02-19 08:29:44.008273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-02-19 08:29:44.008915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-02-19 08:29:44.009840 | orchestrator | 2025-02-19 08:29:44.010246 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:44.011118 | orchestrator | Wednesday 19 February 2025 08:29:43 +0000 (0:00:00.194) 0:00:06.335 **** 2025-02-19 08:29:45.256530 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNqoVXreYOUDYba5OQMeIsIm1tZKxiGw2dqblyxv0+KM7/eeXHlrVLMchfwcfZc/K1q4i96W2LgWlp/XslS97rA=) 2025-02-19 08:29:45.258095 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKqbmGqIf+KeLXEGy2qebfV31RSn2JKPM8LQL4/I7+ed) 2025-02-19 08:29:45.258183 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwKHFsOwdeJwHGaXJ1riQqjRBBvizaVW+TpD+vhA+PohqVf+L9CDnkPAl3ouckSZxQjLkDeRHiqhFwBwebvl2lHXilRvHzg6g+fwvXFkmgwoaackI8Xj2czpZmCad+Iw8PjSpF59nmbF3N9ubMKnXLx/0MSB83EqeQOyyZQEzOCseXDHDzIbcGwRnK8DjQyJgUO/OdyH9DkCPm0Yr79nEOQ6j4dZxf31tOPPKsASuHcE9b9svJUuvhEYvu1VsGCwzSDc+NrGg3c6Vtrel0mHakCy4mMrv8ySeM9/jIk+3tvdA+75SgYXXbyMlQHDVnxL1buNgSZIPip7KA637S01Q3Dtfhk8140aKG5whhfyyHLDAj8UNbjMU2VOLNUCGCbUZtTI1DEhhKPme7PdiBLeAmndfuyzDeRLooZc0SDGXGpVxg9WBfvT0oQFln9Os781A0Dpu6zSxCBvX28AXUiAzna9O944Q4IuXX7U5UYqVoD6mn2gb5ATMgRJa0rAWQR8U=) 2025-02-19 08:29:45.259625 | orchestrator | 2025-02-19 08:29:45.260555 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:45.261886 | orchestrator | Wednesday 19 February 2025 08:29:45 +0000 (0:00:01.251) 0:00:07.586 **** 2025-02-19 08:29:46.432798 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNel9Hdg4ki5Tjcwrj91l7l8O9Q2MJkIvRpgRXnRZ94w80bOZ+FptGhAr6qiGAtWxaW3hz/G2Z5CRoaJFJ/E7M16DBjjoq+0UYGm8nh8oBRR+/dkB9z7SsEmOel+Z8T1ytXsHYo4QJKj5AhL9529C5momOOTEKKN5UsG2eLqSCKZQZjHNi5q6Ev3uLMvh6caj2UR/34jDTh9M+/WswlAhHo0dfNe+Vm31plK1QqmB2PIDAP2sziz3pIhLSoQ3EoQ1i5z/Xekki8E4dAQ8n7Ls8I0PXIyYY/hhd2XUnwcjiL71LpTefSzA6a503AmT7z9gcKmKUc92IZICTAq7zAKJXaoiDX6cDNcxh0VX9LHPOxGT33byrPhCDrmXheu1iGDeSn044ebVg0OdCouZD/iQ60JSeNREGor15/Gqye+886aOvEPFZVg0tq8ZHdhkq3q+TTlQPARwg38nh8n+6vqhs0qDG6JyYy9FTsyTJECuKdQgue+HVMWJBEOEk3oyIVB8=) 2025-02-19 08:29:46.433162 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGCUtesQWMTl/ckMX/DusQDjuM5/7oS/CUFNpUCtXuE+ysz9ogqcsTgFUKjXO3CaAkYhWjS4EdS5MvTmd1ZuZOs=) 2025-02-19 08:29:46.433481 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPE3KyJUpfHeEib8iCb1/9pHR/C8rsTjEiUgN1a2lrG6) 2025-02-19 08:29:46.434218 | orchestrator | 2025-02-19 08:29:46.435025 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:46.435447 | orchestrator | Wednesday 19 February 2025 08:29:46 +0000 (0:00:01.176) 0:00:08.763 **** 2025-02-19 08:29:47.585841 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEc7TDhGvoc4JGvQxKWEaQjDDGTREgCV9Z8cV42R6f3dvqWanqQInDZpPkcgIrBRicSTh0j8WqWYxlo8Vqqnr+0=) 2025-02-19 08:29:47.586489 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpmoYs1xuk1pWC9LIj7bQCxhriYPfhhv1pKlmHrsB+QlFWsiBgBQDG+9PWNG37hDV2qV1nCVcXEmFUcVfyL3dJ1/mqs3ojqlGDZWwwgkhWJXpe4dKXGm8q3GiEvFm7QscxInH8UOI6irGxyM3IjksYm1yevZ/UrGcWmg9V69YQh3a2IvtWLb/7jMpMgOHgVhAbe0d1U2spgt2jJJW3Cls1lKFSAcbGPGYeRTVSj6hc+UZmlKLJeV4sAq6oLmwMAz682ooecGSpp1ngmFg/f6rBYSVuq1uso6AmgCFYJl+3NVqqCELRp71u5oUyrXgeRw5jsSTBKD5dK3uRqrWJsg3q6Ih1abE5qAvoSTlbcNeR2qtXUOJJr/QjsmeRFie7QP0Sm2J0vU53txaH+MF9hYzuaN5agcY0k4npYg0/c0vchg/5Wj2YewQVC5ga1DSfO1fVhaIndXE9XHkKUeUEKrlNTFAhyO1ZwHnDbmuRQsAwvn9gEW3za74Nt43b94E5d4E=) 2025-02-19 08:29:47.587060 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA064cIAlPRhmQTUhSnmMtHd/7gPPBpF37xFs12l0zSu) 2025-02-19 08:29:47.588677 | orchestrator | 2025-02-19 08:29:47.589018 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:47.590090 | orchestrator | Wednesday 19 February 2025 08:29:47 +0000 (0:00:01.153) 0:00:09.916 **** 2025-02-19 08:29:48.676719 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEfTzPxjz4irVuyputdAtv/Zm/s2p+C6816ih4AVLiqMutrfjCFXAHxBhX4691+W6++IBhVvJiiMot6b2Oztc/w=) 2025-02-19 08:29:48.676962 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDblW0CXlqfM5zIPDi5juCxUMhsXQwKTWobV5HmdyVoZoG0ZYBsWtrZb1Wzh+2NV2aN5+sIzurY3DmcJjfO8CGb957iL5lLD+2Id/RlMVsfh9LezYX7hSgQoXv6xoc801XLFrp9Sg/pI1kN7+Y4fPCJ4of6mH6y7J0IacNhqFzPfkLcGCvzBJZP9yNdHoO4mnIwJGPrKmxNQPZ5FoPENF57c4fTx6Q0jUCQ0Us7CbcL1qRRxPurBvDnxwDMWItLbxii2Iyovg0mAPvrdzgF+If+tNHbooKa9cih9XHXGCenroieRB1rxCEWoxmtBclQPUqOZervBQjX6jfafK7rtP9TYZmaltySAlbSoLmkP4Dzy9QtJQTQrXtqjSk0v93GQq8uq5gnabxoUPOTKhGy7+asJL3pIxmG8g0jcCzkqbY3JAuRehY/i2bOEtcywfU033RYZwmaSoesPBcC7C5H7DVwniCtAax+8OSFOla8+qv6NS+W6tePFuHMJqXt9Vwq3zk=) 2025-02-19 08:29:48.677345 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFOiIZOceMx06hP8bF+ViuIyqyIMXOHDlhz3hSq2E91G) 2025-02-19 08:29:48.677460 | orchestrator | 2025-02-19 08:29:48.678136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:48.678267 | orchestrator | Wednesday 19 February 2025 08:29:48 +0000 (0:00:01.092) 0:00:11.009 **** 2025-02-19 08:29:49.836987 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwA6a24OcKGnLirIO4CTzUJBDnZlRie0E1ogLfe7y/5VNOkJZ+jw/+UI58xfZFN04gPgilzGyJcp28nr03vf/OMjyLqW7E+0NyNyvnrjsU2Au4bxswK5ULLFy3bVVnDoFFeFURqqee87z3KHJP8qdEX4I2gIE1S/ItTLmgTlJhxtZllhOHTHTvrjNlJNX6bS8Y1XvazlknI4XnV7xOC6Pgt3EZUp++gpLUDV11YjwFa/htoUPpYqoGtfj7Zibk2jwsgvhghqVJZOXQx3LlddB0Eclc1qtLINH7bUMfRObOEGru4ZQMGMK0T0AaNdsFq0EBvnGcAK5YYdxJOjdLrfnmI2uwyRBm0Sc2mFy7wmuBC4FTysmuhvE1VbL8rN6vLaopUrrmB/EzmFCla/t2FMH/d9GgE+npQAsTrtvRy7Azkr59UkK2jFxX6jYHXtmPeFrw0cQ5EliUeuRPHZn9j+KcL5A43zTYTshKCDpEdYCGmO0HwewwEaEtEgVosSFRGX8=) 2025-02-19 08:29:49.838245 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHLoBPt1AEALALHV36ELIhiurBiTaKYlQc/KvpwgSFTLC5iqofQYRv7LAfn3TGDaQVJbzC3pZh0JKSs7Cnv61wE=) 2025-02-19 08:29:49.838300 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhQhLe7YyC1DGzDoJd032SdiI1wnZBrbP/dSeV48DOD) 2025-02-19 08:29:49.838321 | orchestrator | 2025-02-19 08:29:49.838853 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:49.839773 | orchestrator | Wednesday 19 February 2025 08:29:49 +0000 (0:00:01.158) 0:00:12.167 **** 2025-02-19 08:29:50.957691 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9D2OW9/tbKqXOLbsxL1UAd36ltOI01WyVs46aBIMT/5yiGzGKvEFdrbJq2zEDw7Can88c3qW0gkcwNjm6rP7/o7zRbdSo+EDBBxDFd+4CJaAS2sUMW3tgQPo+ieRBVGaPyngPkTVoADmXlhvjdlDEhqxe8eNWQ9K4lUBYToXv/OIU872XsY+5jT5VPqsNZm6ZaQ4Od4+YDvjkeDBuBAckMaZMol5X2O3jPBOW/QMNa2c26qF0OR5aj3Js7dVzvgmJ1SSjm5Rt6FGjGfgngl7FKJD8wR22xLVpNwHkrQtZ7DVAFgHUIUZD+V9limLM+oohnPMpMK+pUB4tnw+vQ7PFK38Di9Kq31nBQgUnRRfmQa8eJYQXOk9tJZC94xEsH2LNm734hxpFAZH0t97i8GLWgQl5LV2hk9wfAb1biBIrNB2VxDA95peOCIcLY1xkoWMemjCeuJ+LW607EF4sNoQB/PYSS83kK7fLIpjFH4B/0cnyAKC6dkmupabdnJpNue8=) 2025-02-19 08:29:50.958623 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBE+7vsUF/SnAGPSVF1xAi3+0w3fnSVYcNqcbn8D9S/w2PME225KsZq4qskBNPhQNSf3niRpW8p4wHyARjGRTUI=) 2025-02-19 08:29:50.958748 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINGqfrxSucFMogLUrIPGy/ap4qIIZ7IKNPl44dsMn/8B) 2025-02-19 08:29:50.960109 | orchestrator | 2025-02-19 08:29:50.961093 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:50.962630 | orchestrator | Wednesday 19 February 2025 08:29:50 +0000 (0:00:01.121) 0:00:13.289 **** 2025-02-19 08:29:52.107370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr7CLCd7bqATscqhIbDpw5o9ssWjIgFTVzvD8pf/m2DQbpTOfSFXve/12RJ8C/+whk63YJBigOIA4h0CRZtKVxEO23Z8vBmumjAy/6OETN9uIu22NuOjro6o0waapu1eupwtqqrfigrmzcprI0Il0BMHsIOiaxcnTdOyci0h7d5H5+5irACaJ8qREhyk2MJADOXYnt1MD6eWLVgzKOYLlvutvaUbxw7PpdtJcJfqgiNdFZS1xrBTXVMElI5yaVjXWQGJJAParHfzYiX9tXFpmEfE5QdIn2NYJULfb2GamJaJVVZHwq8U5rX++iRfUohr37iatOcRnHz9UjBy+H9JqaSFE+qD27mVa6xiy3EN88WJ3FTWZVi0Gj+gbIpwFvutMBJ8H0GFjMUwwEowJMfQTbRAzmtOjzV672RUx8yWksjeEbbO2Eo2D/xc33JVGSplAp32ingAp037kj38dthsCXqdqrLtykqFTQkybKG25afz3jj2HgAenbzFezHc2HtsU=) 2025-02-19 08:29:52.107688 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOLSJ/nk8Rs4YON994HnllFO3MSiqXmsrxOd9ljBuqNzj05VMbb0qXAVTO72ujfKm2AvPeC9GZDMt0jymTKTRy4=) 2025-02-19 
08:29:52.107726 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID17eWjoClL4k0xU2b+coA7DiOV1WLX93RcJfIXA/PaF) 2025-02-19 08:29:52.108218 | orchestrator | 2025-02-19 08:29:52.109105 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-02-19 08:29:52.109813 | orchestrator | Wednesday 19 February 2025 08:29:52 +0000 (0:00:01.147) 0:00:14.436 **** 2025-02-19 08:29:57.622513 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-02-19 08:29:57.622977 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-02-19 08:29:57.623605 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-02-19 08:29:57.623634 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-02-19 08:29:57.623650 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-02-19 08:29:57.623671 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-02-19 08:29:57.625097 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-02-19 08:29:57.625392 | orchestrator | 2025-02-19 08:29:57.625955 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-02-19 08:29:57.626186 | orchestrator | Wednesday 19 February 2025 08:29:57 +0000 (0:00:05.514) 0:00:19.951 **** 2025-02-19 08:29:57.839160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-02-19 08:29:57.839892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-02-19 08:29:57.839977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-02-19 08:29:57.841746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-02-19 08:29:57.842318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-02-19 08:29:57.842350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-02-19 08:29:57.842666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-02-19 08:29:57.843174 | orchestrator | 2025-02-19 08:29:57.843904 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:57.844214 | orchestrator | Wednesday 19 February 2025 08:29:57 +0000 (0:00:00.220) 0:00:20.171 **** 2025-02-19 08:29:58.999885 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwKHFsOwdeJwHGaXJ1riQqjRBBvizaVW+TpD+vhA+PohqVf+L9CDnkPAl3ouckSZxQjLkDeRHiqhFwBwebvl2lHXilRvHzg6g+fwvXFkmgwoaackI8Xj2czpZmCad+Iw8PjSpF59nmbF3N9ubMKnXLx/0MSB83EqeQOyyZQEzOCseXDHDzIbcGwRnK8DjQyJgUO/OdyH9DkCPm0Yr79nEOQ6j4dZxf31tOPPKsASuHcE9b9svJUuvhEYvu1VsGCwzSDc+NrGg3c6Vtrel0mHakCy4mMrv8ySeM9/jIk+3tvdA+75SgYXXbyMlQHDVnxL1buNgSZIPip7KA637S01Q3Dtfhk8140aKG5whhfyyHLDAj8UNbjMU2VOLNUCGCbUZtTI1DEhhKPme7PdiBLeAmndfuyzDeRLooZc0SDGXGpVxg9WBfvT0oQFln9Os781A0Dpu6zSxCBvX28AXUiAzna9O944Q4IuXX7U5UYqVoD6mn2gb5ATMgRJa0rAWQR8U=) 2025-02-19 08:29:59.000211 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNqoVXreYOUDYba5OQMeIsIm1tZKxiGw2dqblyxv0+KM7/eeXHlrVLMchfwcfZc/K1q4i96W2LgWlp/XslS97rA=) 2025-02-19 08:29:59.000261 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKqbmGqIf+KeLXEGy2qebfV31RSn2JKPM8LQL4/I7+ed) 2025-02-19 08:29:59.000353 | orchestrator | 2025-02-19 08:29:59.000448 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:29:59.000487 | orchestrator | Wednesday 19 February 2025 08:29:58 +0000 (0:00:01.159) 0:00:21.331 **** 2025-02-19 08:30:00.110926 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNel9Hdg4ki5Tjcwrj91l7l8O9Q2MJkIvRpgRXnRZ94w80bOZ+FptGhAr6qiGAtWxaW3hz/G2Z5CRoaJFJ/E7M16DBjjoq+0UYGm8nh8oBRR+/dkB9z7SsEmOel+Z8T1ytXsHYo4QJKj5AhL9529C5momOOTEKKN5UsG2eLqSCKZQZjHNi5q6Ev3uLMvh6caj2UR/34jDTh9M+/WswlAhHo0dfNe+Vm31plK1QqmB2PIDAP2sziz3pIhLSoQ3EoQ1i5z/Xekki8E4dAQ8n7Ls8I0PXIyYY/hhd2XUnwcjiL71LpTefSzA6a503AmT7z9gcKmKUc92IZICTAq7zAKJXaoiDX6cDNcxh0VX9LHPOxGT33byrPhCDrmXheu1iGDeSn044ebVg0OdCouZD/iQ60JSeNREGor15/Gqye+886aOvEPFZVg0tq8ZHdhkq3q+TTlQPARwg38nh8n+6vqhs0qDG6JyYy9FTsyTJECuKdQgue+HVMWJBEOEk3oyIVB8=) 2025-02-19 08:30:00.111899 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGCUtesQWMTl/ckMX/DusQDjuM5/7oS/CUFNpUCtXuE+ysz9ogqcsTgFUKjXO3CaAkYhWjS4EdS5MvTmd1ZuZOs=) 2025-02-19 08:30:00.111958 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPE3KyJUpfHeEib8iCb1/9pHR/C8rsTjEiUgN1a2lrG6) 2025-02-19 08:30:00.112032 | orchestrator | 2025-02-19 08:30:01.274778 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:30:01.274943 | orchestrator | Wednesday 19 February 2025 08:30:00 +0000 (0:00:01.110) 0:00:22.442 **** 2025-02-19 08:30:01.274984 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpmoYs1xuk1pWC9LIj7bQCxhriYPfhhv1pKlmHrsB+QlFWsiBgBQDG+9PWNG37hDV2qV1nCVcXEmFUcVfyL3dJ1/mqs3ojqlGDZWwwgkhWJXpe4dKXGm8q3GiEvFm7QscxInH8UOI6irGxyM3IjksYm1yevZ/UrGcWmg9V69YQh3a2IvtWLb/7jMpMgOHgVhAbe0d1U2spgt2jJJW3Cls1lKFSAcbGPGYeRTVSj6hc+UZmlKLJeV4sAq6oLmwMAz682ooecGSpp1ngmFg/f6rBYSVuq1uso6AmgCFYJl+3NVqqCELRp71u5oUyrXgeRw5jsSTBKD5dK3uRqrWJsg3q6Ih1abE5qAvoSTlbcNeR2qtXUOJJr/QjsmeRFie7QP0Sm2J0vU53txaH+MF9hYzuaN5agcY0k4npYg0/c0vchg/5Wj2YewQVC5ga1DSfO1fVhaIndXE9XHkKUeUEKrlNTFAhyO1ZwHnDbmuRQsAwvn9gEW3za74Nt43b94E5d4E=) 2025-02-19 08:30:01.276627 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEc7TDhGvoc4JGvQxKWEaQjDDGTREgCV9Z8cV42R6f3dvqWanqQInDZpPkcgIrBRicSTh0j8WqWYxlo8Vqqnr+0=) 2025-02-19 
08:30:01.276678 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA064cIAlPRhmQTUhSnmMtHd/7gPPBpF37xFs12l0zSu) 2025-02-19 08:30:01.276689 | orchestrator | 2025-02-19 08:30:01.276708 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:30:02.554103 | orchestrator | Wednesday 19 February 2025 08:30:01 +0000 (0:00:01.163) 0:00:23.605 **** 2025-02-19 08:30:02.554220 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDblW0CXlqfM5zIPDi5juCxUMhsXQwKTWobV5HmdyVoZoG0ZYBsWtrZb1Wzh+2NV2aN5+sIzurY3DmcJjfO8CGb957iL5lLD+2Id/RlMVsfh9LezYX7hSgQoXv6xoc801XLFrp9Sg/pI1kN7+Y4fPCJ4of6mH6y7J0IacNhqFzPfkLcGCvzBJZP9yNdHoO4mnIwJGPrKmxNQPZ5FoPENF57c4fTx6Q0jUCQ0Us7CbcL1qRRxPurBvDnxwDMWItLbxii2Iyovg0mAPvrdzgF+If+tNHbooKa9cih9XHXGCenroieRB1rxCEWoxmtBclQPUqOZervBQjX6jfafK7rtP9TYZmaltySAlbSoLmkP4Dzy9QtJQTQrXtqjSk0v93GQq8uq5gnabxoUPOTKhGy7+asJL3pIxmG8g0jcCzkqbY3JAuRehY/i2bOEtcywfU033RYZwmaSoesPBcC7C5H7DVwniCtAax+8OSFOla8+qv6NS+W6tePFuHMJqXt9Vwq3zk=) 2025-02-19 08:30:02.555291 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEfTzPxjz4irVuyputdAtv/Zm/s2p+C6816ih4AVLiqMutrfjCFXAHxBhX4691+W6++IBhVvJiiMot6b2Oztc/w=) 2025-02-19 08:30:02.556373 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFOiIZOceMx06hP8bF+ViuIyqyIMXOHDlhz3hSq2E91G) 2025-02-19 08:30:02.557783 | orchestrator | 2025-02-19 08:30:02.558534 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:30:02.559390 | orchestrator | Wednesday 19 February 2025 08:30:02 +0000 (0:00:01.278) 0:00:24.884 **** 2025-02-19 08:30:03.693319 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwA6a24OcKGnLirIO4CTzUJBDnZlRie0E1ogLfe7y/5VNOkJZ+jw/+UI58xfZFN04gPgilzGyJcp28nr03vf/OMjyLqW7E+0NyNyvnrjsU2Au4bxswK5ULLFy3bVVnDoFFeFURqqee87z3KHJP8qdEX4I2gIE1S/ItTLmgTlJhxtZllhOHTHTvrjNlJNX6bS8Y1XvazlknI4XnV7xOC6Pgt3EZUp++gpLUDV11YjwFa/htoUPpYqoGtfj7Zibk2jwsgvhghqVJZOXQx3LlddB0Eclc1qtLINH7bUMfRObOEGru4ZQMGMK0T0AaNdsFq0EBvnGcAK5YYdxJOjdLrfnmI2uwyRBm0Sc2mFy7wmuBC4FTysmuhvE1VbL8rN6vLaopUrrmB/EzmFCla/t2FMH/d9GgE+npQAsTrtvRy7Azkr59UkK2jFxX6jYHXtmPeFrw0cQ5EliUeuRPHZn9j+KcL5A43zTYTshKCDpEdYCGmO0HwewwEaEtEgVosSFRGX8=) 2025-02-19 08:30:03.693856 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHLoBPt1AEALALHV36ELIhiurBiTaKYlQc/KvpwgSFTLC5iqofQYRv7LAfn3TGDaQVJbzC3pZh0JKSs7Cnv61wE=) 2025-02-19 08:30:03.696033 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhQhLe7YyC1DGzDoJd032SdiI1wnZBrbP/dSeV48DOD) 2025-02-19 08:30:03.696184 | orchestrator | 2025-02-19 08:30:03.696535 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:30:03.696869 | orchestrator | Wednesday 19 February 2025 08:30:03 +0000 (0:00:01.140) 0:00:26.025 **** 2025-02-19 08:30:04.874380 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC9D2OW9/tbKqXOLbsxL1UAd36ltOI01WyVs46aBIMT/5yiGzGKvEFdrbJq2zEDw7Can88c3qW0gkcwNjm6rP7/o7zRbdSo+EDBBxDFd+4CJaAS2sUMW3tgQPo+ieRBVGaPyngPkTVoADmXlhvjdlDEhqxe8eNWQ9K4lUBYToXv/OIU872XsY+5jT5VPqsNZm6ZaQ4Od4+YDvjkeDBuBAckMaZMol5X2O3jPBOW/QMNa2c26qF0OR5aj3Js7dVzvgmJ1SSjm5Rt6FGjGfgngl7FKJD8wR22xLVpNwHkrQtZ7DVAFgHUIUZD+V9limLM+oohnPMpMK+pUB4tnw+vQ7PFK38Di9Kq31nBQgUnRRfmQa8eJYQXOk9tJZC94xEsH2LNm734hxpFAZH0t97i8GLWgQl5LV2hk9wfAb1biBIrNB2VxDA95peOCIcLY1xkoWMemjCeuJ+LW607EF4sNoQB/PYSS83kK7fLIpjFH4B/0cnyAKC6dkmupabdnJpNue8=) 2025-02-19 08:30:04.875908 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBE+7vsUF/SnAGPSVF1xAi3+0w3fnSVYcNqcbn8D9S/w2PME225KsZq4qskBNPhQNSf3niRpW8p4wHyARjGRTUI=) 2025-02-19 08:30:04.876485 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINGqfrxSucFMogLUrIPGy/ap4qIIZ7IKNPl44dsMn/8B) 2025-02-19 08:30:04.877518 | orchestrator | 2025-02-19 08:30:04.879344 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-19 08:30:04.880237 | orchestrator | Wednesday 19 February 2025 08:30:04 +0000 (0:00:01.179) 0:00:27.205 **** 2025-02-19 08:30:05.974259 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr7CLCd7bqATscqhIbDpw5o9ssWjIgFTVzvD8pf/m2DQbpTOfSFXve/12RJ8C/+whk63YJBigOIA4h0CRZtKVxEO23Z8vBmumjAy/6OETN9uIu22NuOjro6o0waapu1eupwtqqrfigrmzcprI0Il0BMHsIOiaxcnTdOyci0h7d5H5+5irACaJ8qREhyk2MJADOXYnt1MD6eWLVgzKOYLlvutvaUbxw7PpdtJcJfqgiNdFZS1xrBTXVMElI5yaVjXWQGJJAParHfzYiX9tXFpmEfE5QdIn2NYJULfb2GamJaJVVZHwq8U5rX++iRfUohr37iatOcRnHz9UjBy+H9JqaSFE+qD27mVa6xiy3EN88WJ3FTWZVi0Gj+gbIpwFvutMBJ8H0GFjMUwwEowJMfQTbRAzmtOjzV672RUx8yWksjeEbbO2Eo2D/xc33JVGSplAp32ingAp037kj38dthsCXqdqrLtykqFTQkybKG25afz3jj2HgAenbzFezHc2HtsU=) 2025-02-19 08:30:05.974949 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOLSJ/nk8Rs4YON994HnllFO3MSiqXmsrxOd9ljBuqNzj05VMbb0qXAVTO72ujfKm2AvPeC9GZDMt0jymTKTRy4=) 2025-02-19 08:30:05.975645 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID17eWjoClL4k0xU2b+coA7DiOV1WLX93RcJfIXA/PaF) 2025-02-19 08:30:05.976794 | orchestrator | 2025-02-19 08:30:05.977013 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-02-19 08:30:05.977654 | orchestrator | Wednesday 19 February 2025 08:30:05 +0000 (0:00:01.099) 0:00:28.304 **** 2025-02-19 08:30:06.164712 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-02-19 08:30:06.165177 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-02-19 08:30:06.166990 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-02-19 08:30:06.167099 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-02-19 08:30:06.168707 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-02-19 08:30:06.169491 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-02-19 08:30:06.169894 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-02-19 08:30:06.170395 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:30:06.170949 | orchestrator | 2025-02-19 08:30:06.171532 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-02-19 08:30:06.171911 | orchestrator | Wednesday 19 February 2025 08:30:06 +0000 (0:00:00.192) 0:00:28.497 **** 2025-02-19 08:30:06.343372 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:30:06.343504 | orchestrator | 2025-02-19 08:30:06.343525 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-02-19 08:30:06.345232 | orchestrator | Wednesday 19 February 2025 08:30:06 +0000 (0:00:00.177) 0:00:28.675 **** 2025-02-19 08:30:06.422400 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:30:06.423034 | orchestrator | 2025-02-19 08:30:06.424777 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-02-19 08:30:07.038398 | orchestrator | Wednesday 19 February 2025 08:30:06 +0000 (0:00:00.079) 0:00:28.754 **** 2025-02-19 08:30:07.038534 | orchestrator | changed: [testbed-manager] 2025-02-19 08:30:07.038735 | orchestrator | 2025-02-19 08:30:07.038764 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:30:07.038781 | orchestrator | 2025-02-19 08:30:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:30:07.038805 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-19 08:30:07.039324 | orchestrator | 2025-02-19 08:30:07 | INFO  | Please wait and do not abort execution. 2025-02-19 08:30:07.039355 | orchestrator | 2025-02-19 08:30:07.039923 | orchestrator | 2025-02-19 08:30:07.040174 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:30:07.042187 | orchestrator | Wednesday 19 February 2025 08:30:07 +0000 (0:00:00.612) 0:00:29.367 **** 2025-02-19 08:30:07.042858 | orchestrator | =============================================================================== 2025-02-19 08:30:07.043460 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.99s 2025-02-19 08:30:07.044085 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.51s 2025-02-19 08:30:07.044402 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.28s 2025-02-19 08:30:07.045004 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s 2025-02-19 08:30:07.045373 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-02-19 08:30:07.046091 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-02-19 08:30:07.046204 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-02-19 08:30:07.047016 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-02-19 08:30:07.047124 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-02-19 08:30:07.047252 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-02-19 08:30:07.047933 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-02-19 08:30:07.048075 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-02-19 08:30:07.048515 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.12s 2025-02-19 08:30:07.048734 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-02-19 08:30:07.049057 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-19 08:30:07.049303 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-02-19 08:30:07.049680 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.61s 2025-02-19 08:30:07.049917 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.22s 2025-02-19 08:30:07.050208 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-02-19 08:30:07.050711 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2025-02-19 08:30:07.490748 | orchestrator | + osism apply squid 2025-02-19 08:30:08.975597 | orchestrator | 2025-02-19 08:30:08 | INFO  | Task 2efe14b5-507f-4fe6-aca8-17695b359f48 (squid) was prepared for execution. 2025-02-19 08:30:12.213483 | orchestrator | 2025-02-19 08:30:08 | INFO  | It takes a moment until task 2efe14b5-507f-4fe6-aca8-17695b359f48 (squid) has been started and output is visible here. 2025-02-19 08:30:12.213676 | orchestrator | 2025-02-19 08:30:12.214954 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-02-19 08:30:12.215745 | orchestrator | 2025-02-19 08:30:12.215783 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-02-19 08:30:12.215804 | orchestrator | Wednesday 19 February 2025 08:30:12 +0000 (0:00:00.122) 0:00:00.122 **** 2025-02-19 08:30:12.309303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-02-19 08:30:12.309673 | orchestrator | 2025-02-19 08:30:12.310893 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-02-19 08:30:12.311264 | orchestrator | Wednesday 19 February 2025 08:30:12 +0000 (0:00:00.098) 0:00:00.221 **** 2025-02-19 08:30:13.874688 | orchestrator | ok: [testbed-manager] 2025-02-19 08:30:13.874840 | orchestrator | 2025-02-19 08:30:13.875679 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-02-19 08:30:13.875945 | orchestrator | Wednesday 19 February 2025 08:30:13 +0000 (0:00:01.564) 0:00:01.785 **** 2025-02-19 08:30:15.177740 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-02-19 08:30:15.182163 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-02-19 08:30:15.183192 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-02-19 08:30:15.183308 | orchestrator | 2025-02-19 08:30:15.183329 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-02-19 08:30:15.183359 | orchestrator | Wednesday 19 February 2025 08:30:15 +0000 (0:00:01.303) 0:00:03.088 **** 2025-02-19 08:30:16.345918 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-02-19 08:30:16.347200 | orchestrator | 2025-02-19 08:30:16.347238 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-02-19 08:30:16.347696 | orchestrator | Wednesday 19 February 2025 08:30:16 
+0000 (0:00:01.168) 0:00:04.257 **** 2025-02-19 08:30:16.729624 | orchestrator | ok: [testbed-manager] 2025-02-19 08:30:16.730265 | orchestrator | 2025-02-19 08:30:16.730328 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-02-19 08:30:16.730877 | orchestrator | Wednesday 19 February 2025 08:30:16 +0000 (0:00:00.384) 0:00:04.641 **** 2025-02-19 08:30:17.819090 | orchestrator | changed: [testbed-manager] 2025-02-19 08:30:17.819623 | orchestrator | 2025-02-19 08:30:17.819657 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-02-19 08:30:17.820184 | orchestrator | Wednesday 19 February 2025 08:30:17 +0000 (0:00:01.088) 0:00:05.730 **** 2025-02-19 08:30:45.456360 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-02-19 08:30:45.457531 | orchestrator | ok: [testbed-manager] 2025-02-19 08:30:45.458062 | orchestrator | 2025-02-19 08:30:45.459515 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-02-19 08:30:45.460034 | orchestrator | Wednesday 19 February 2025 08:30:45 +0000 (0:00:27.633) 0:00:33.364 **** 2025-02-19 08:30:57.899450 | orchestrator | changed: [testbed-manager] 2025-02-19 08:30:57.899610 | orchestrator | 2025-02-19 08:30:57.899626 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-02-19 08:30:57.902749 | orchestrator | Wednesday 19 February 2025 08:30:57 +0000 (0:00:12.443) 0:00:45.807 **** 2025-02-19 08:31:57.991405 | orchestrator | Pausing for 60 seconds 2025-02-19 08:31:58.053472 | orchestrator | changed: [testbed-manager] 2025-02-19 08:31:58.053631 | orchestrator | 2025-02-19 08:31:58.053655 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-02-19 08:31:58.053671 | orchestrator | Wednesday 19 February 2025 08:31:57 +0000 (0:01:00.090) 0:01:45.898 **** 2025-02-19 08:31:58.053704 | orchestrator | ok: [testbed-manager] 2025-02-19 08:31:58.054070 | orchestrator | 2025-02-19 08:31:58.058265 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-02-19 08:31:58.705187 | orchestrator | Wednesday 19 February 2025 08:31:58 +0000 (0:00:00.067) 0:01:45.965 **** 2025-02-19 08:31:58.705327 | orchestrator | changed: [testbed-manager] 2025-02-19 08:31:58.705917 | orchestrator | 2025-02-19 08:31:58.705957 | orchestrator | 2025-02-19 08:31:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:31:58.706117 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:31:58.706356 | orchestrator | 2025-02-19 08:31:58 | INFO  | Please wait and do not abort execution. 
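The squid role above laid down /opt/squid (configuration, conf.d, a docker-compose.yml and the osism.conf snippet) and then started and health-checked the proxy container, which explains the long "Manage squid service" and "Wait for squid service to start" timings in the recap below. A hypothetical status check, mirroring the compose calls earlier in this log; /opt/squid as project directory is inferred from the directory tasks above:

    docker compose --project-directory /opt/squid ps    # the squid container should report a healthy status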
2025-02-19 08:31:58.706998 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:31:58.707704 | orchestrator | 2025-02-19 08:31:58.708198 | orchestrator | 2025-02-19 08:31:58.709577 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:31:58.710778 | orchestrator | Wednesday 19 February 2025 08:31:58 +0000 (0:00:00.652) 0:01:46.617 **** 2025-02-19 08:31:58.711173 | orchestrator | =============================================================================== 2025-02-19 08:31:58.711982 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-02-19 08:31:58.712915 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 27.63s 2025-02-19 08:31:58.713183 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.44s 2025-02-19 08:31:58.714192 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.56s 2025-02-19 08:31:58.714339 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.30s 2025-02-19 08:31:58.715042 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.17s 2025-02-19 08:31:58.715537 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.09s 2025-02-19 08:31:58.715845 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-02-19 08:31:58.717631 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-02-19 08:31:58.717820 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-02-19 08:31:58.718316 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-02-19 08:31:59.152733 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-19 08:31:59.153414 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-19 08:31:59.153465 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-02-19 08:32:00.656070 | orchestrator | 2025-02-19 08:32:00 | INFO  | Task 45fb67ef-d875-43da-a698-6ad621d31c4a (operator) was prepared for execution. 2025-02-19 08:32:03.757930 | orchestrator | 2025-02-19 08:32:00 | INFO  | It takes a moment until task 45fb67ef-d875-43da-a698-6ad621d31c4a (operator) has been started and output is visible here. 
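The last role in this excerpt, operator, bootstraps the OSISM operator account on the freshly provisioned nodes: the play below creates the group and user, grants sudo, sets locale variables in .bashrc and prepares ~/.ssh with authorized keys. The -u ubuntu flag presumably makes Ansible connect with the cloud image's default account, since the operator user does not exist on the nodes yet. A hypothetical follow-up check from the manager once the play has finished; host and user resolution rely on the sshconfig and known-hosts plays above:

    ssh testbed-node-0 id                                   # operator user and group memberships created by the role
    ssh testbed-node-0 'sudo -n true && echo sudo works'    # passwordless sudo from the copied sudoers file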
2025-02-19 08:32:03.758144 | orchestrator | 2025-02-19 08:32:03.762854 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-02-19 08:32:03.762909 | orchestrator | 2025-02-19 08:32:03.763708 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 08:32:03.763746 | orchestrator | Wednesday 19 February 2025 08:32:03 +0000 (0:00:00.099) 0:00:00.100 **** 2025-02-19 08:32:07.541362 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:07.541969 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:07.542058 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:32:07.542687 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:07.543105 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:32:07.543513 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:32:07.544332 | orchestrator | 2025-02-19 08:32:07.544816 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-02-19 08:32:07.545191 | orchestrator | Wednesday 19 February 2025 08:32:07 +0000 (0:00:03.787) 0:00:03.887 **** 2025-02-19 08:32:08.351406 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:08.352324 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:08.352735 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:32:08.352780 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:08.354466 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:32:08.355242 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:32:08.355300 | orchestrator | 2025-02-19 08:32:08.355741 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-02-19 08:32:08.356915 | orchestrator | 2025-02-19 08:32:08.357824 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-02-19 08:32:08.358216 | orchestrator | Wednesday 19 February 2025 08:32:08 +0000 (0:00:00.809) 0:00:04.697 **** 2025-02-19 08:32:08.426111 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:32:08.455776 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:32:08.480230 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:32:08.526229 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:08.526451 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:08.526469 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:08.526667 | orchestrator | 2025-02-19 08:32:08.529257 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-02-19 08:32:08.530934 | orchestrator | Wednesday 19 February 2025 08:32:08 +0000 (0:00:00.175) 0:00:04.872 **** 2025-02-19 08:32:08.585265 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:32:08.632187 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:32:08.663344 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:32:08.720782 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:08.721223 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:08.721447 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:08.722301 | orchestrator | 2025-02-19 08:32:08.722531 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-02-19 08:32:08.722869 | orchestrator | Wednesday 19 February 2025 08:32:08 +0000 (0:00:00.194) 0:00:05.067 **** 2025-02-19 08:32:09.357869 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:32:09.358282 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:09.358330 | orchestrator | changed: [testbed-node-4] 2025-02-19 
08:32:09.358933 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:09.360650 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:32:09.362000 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:32:09.362346 | orchestrator | 2025-02-19 08:32:09.362382 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-02-19 08:32:10.179512 | orchestrator | Wednesday 19 February 2025 08:32:09 +0000 (0:00:00.636) 0:00:05.704 **** 2025-02-19 08:32:10.179702 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:32:10.180011 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:10.180980 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:10.182102 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:10.182720 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:32:10.183207 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:32:10.185403 | orchestrator | 2025-02-19 08:32:10.185882 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-02-19 08:32:10.186614 | orchestrator | Wednesday 19 February 2025 08:32:10 +0000 (0:00:00.816) 0:00:06.520 **** 2025-02-19 08:32:11.435269 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-02-19 08:32:11.437442 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-02-19 08:32:11.438619 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-02-19 08:32:11.438663 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-02-19 08:32:11.439354 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-02-19 08:32:11.440264 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-02-19 08:32:11.440745 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-02-19 08:32:11.442175 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-02-19 08:32:11.443110 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-02-19 08:32:11.444006 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-02-19 08:32:11.444884 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-02-19 08:32:11.445769 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-02-19 08:32:11.446673 | orchestrator | 2025-02-19 08:32:11.447212 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-02-19 08:32:11.448076 | orchestrator | Wednesday 19 February 2025 08:32:11 +0000 (0:00:01.259) 0:00:07.780 **** 2025-02-19 08:32:12.709273 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:32:12.710134 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:12.710187 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:12.711090 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:32:12.712810 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:32:12.713775 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:12.714740 | orchestrator | 2025-02-19 08:32:12.715714 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-02-19 08:32:12.716629 | orchestrator | Wednesday 19 February 2025 08:32:12 +0000 (0:00:01.273) 0:00:09.053 **** 2025-02-19 08:32:13.859453 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-02-19 08:32:13.946591 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-02-19 08:32:13.946791 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-02-19 08:32:13.946845 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-02-19 08:32:13.946977 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-02-19 08:32:13.948191 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-02-19 08:32:13.948918 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-02-19 08:32:13.950901 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-02-19 08:32:13.951332 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-02-19 08:32:13.953256 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-02-19 08:32:13.954723 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-02-19 08:32:13.955778 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-02-19 08:32:13.956524 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-02-19 08:32:13.957092 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-02-19 08:32:13.957921 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-02-19 08:32:13.958678 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-02-19 08:32:13.959609 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-02-19 08:32:13.960171 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-02-19 08:32:13.960689 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-02-19 08:32:13.960951 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-02-19 08:32:13.962157 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-02-19 08:32:13.962737 | orchestrator | 2025-02-19 08:32:13.963381 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-02-19 08:32:13.963704 | orchestrator | Wednesday 19 February 2025 08:32:13 +0000 (0:00:01.238) 0:00:10.292 **** 2025-02-19 08:32:14.517760 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:14.517995 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:32:14.518832 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:32:14.519701 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:32:14.520216 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:14.520915 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:14.521557 | orchestrator | 2025-02-19 08:32:14.522067 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-02-19 08:32:14.522731 | orchestrator | Wednesday 19 February 2025 08:32:14 +0000 (0:00:00.571) 0:00:10.863 **** 2025-02-19 08:32:14.595974 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:32:14.624136 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:32:14.652410 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:32:14.716931 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:32:14.718323 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:32:14.719331 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:32:14.721163 | orchestrator | 2025-02-19 08:32:14.722222 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-02-19 08:32:14.723124 | orchestrator | Wednesday 19 February 2025 08:32:14 +0000 (0:00:00.199) 0:00:11.062 **** 2025-02-19 08:32:15.460359 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-02-19 08:32:15.460750 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-19 08:32:15.460794 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:32:15.461419 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:32:15.461754 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-19 08:32:15.462718 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:15.463027 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-19 08:32:15.463051 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:15.463070 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-02-19 08:32:15.464299 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:32:15.464514 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-19 08:32:15.464543 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:15.465130 | orchestrator | 2025-02-19 08:32:15.465889 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-02-19 08:32:15.466063 | orchestrator | Wednesday 19 February 2025 08:32:15 +0000 (0:00:00.742) 0:00:11.804 **** 2025-02-19 08:32:15.531110 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:32:15.555916 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:32:15.581207 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:32:15.625807 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:32:15.626506 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:32:15.627781 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:32:15.628404 | orchestrator | 2025-02-19 08:32:15.629307 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-02-19 08:32:15.630414 | orchestrator | Wednesday 19 February 2025 08:32:15 +0000 (0:00:00.167) 0:00:11.972 **** 2025-02-19 08:32:15.686504 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:32:15.714349 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:32:15.740013 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:32:15.824483 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:32:15.824931 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:32:15.828815 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:32:15.828962 | orchestrator | 2025-02-19 08:32:15.828975 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-02-19 08:32:15.828983 | orchestrator | Wednesday 19 February 2025 08:32:15 +0000 (0:00:00.198) 0:00:12.171 **** 2025-02-19 08:32:15.907382 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:32:15.924774 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:32:15.947177 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:32:15.990724 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:32:15.991525 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:32:15.991915 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:32:15.992977 | orchestrator | 2025-02-19 08:32:15.993896 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-02-19 08:32:15.994333 | orchestrator | Wednesday 19 February 2025 08:32:15 +0000 (0:00:00.165) 0:00:12.337 **** 2025-02-19 08:32:16.659644 | orchestrator | changed: [testbed-node-0] 2025-02-19 
08:32:16.663059 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:32:16.663461 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:32:16.664206 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:16.665741 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:16.665814 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:16.666670 | orchestrator | 2025-02-19 08:32:16.667691 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-02-19 08:32:16.668190 | orchestrator | Wednesday 19 February 2025 08:32:16 +0000 (0:00:00.668) 0:00:13.005 **** 2025-02-19 08:32:16.768166 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:32:16.798265 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:32:16.913016 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:32:16.913420 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:32:16.913465 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:32:16.913663 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:32:16.914285 | orchestrator | 2025-02-19 08:32:16.917213 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:32:16.918120 | orchestrator | 2025-02-19 08:32:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:32:16.918164 | orchestrator | 2025-02-19 08:32:16 | INFO  | Please wait and do not abort execution. 2025-02-19 08:32:16.918188 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:32:16.919533 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:32:16.920883 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:32:16.921779 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:32:16.922594 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:32:16.923366 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:32:16.924152 | orchestrator | 2025-02-19 08:32:16.925604 | orchestrator | 2025-02-19 08:32:16.926270 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:32:16.927425 | orchestrator | Wednesday 19 February 2025 08:32:16 +0000 (0:00:00.252) 0:00:13.257 **** 2025-02-19 08:32:16.928262 | orchestrator | =============================================================================== 2025-02-19 08:32:16.929161 | orchestrator | Gathering Facts --------------------------------------------------------- 3.79s 2025-02-19 08:32:16.930150 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2025-02-19 08:32:16.930913 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.26s 2025-02-19 08:32:16.931659 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.24s 2025-02-19 08:32:16.931884 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s 2025-02-19 08:32:16.932881 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s 2025-02-19 08:32:16.933525 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2025-02-19 08:32:16.933610 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2025-02-19 08:32:16.935909 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s 2025-02-19 08:32:16.936952 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-02-19 08:32:16.938514 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2025-02-19 08:32:16.939077 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-02-19 08:32:16.939856 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s 2025-02-19 08:32:16.940594 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s 2025-02-19 08:32:16.941424 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-02-19 08:32:16.942262 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2025-02-19 08:32:16.942876 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-02-19 08:32:17.374913 | orchestrator | + osism apply --environment custom facts 2025-02-19 08:32:18.794292 | orchestrator | 2025-02-19 08:32:18 | INFO  | Trying to run play facts in environment custom 2025-02-19 08:32:18.840829 | orchestrator | 2025-02-19 08:32:18 | INFO  | Task 4e50a9f5-4f99-4127-be21-481d81778e29 (facts) was prepared for execution. 2025-02-19 08:32:22.023268 | orchestrator | 2025-02-19 08:32:18 | INFO  | It takes a moment until task 4e50a9f5-4f99-4127-be21-481d81778e29 (facts) has been started and output is visible here. 
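Editor's note: the `osism apply --environment custom facts` step that follows distributes local fact files to the nodes. As a hedged sketch (not the actual testbed play; the fact names are taken from the loop items logged below, /etc/ansible/facts.d is the standard Ansible local-facts directory, and the contents of the fact files are not shown in this log), it boils down to something like:

    ---
    # Hedged sketch - copy custom facts into the standard local-facts directory.
    - hosts: testbed-nodes
      become: true
      tasks:
        - name: Create custom facts directory
          ansible.builtin.file:
            path: /etc/ansible/facts.d
            state: directory
            mode: "0755"

        - name: Copy fact files
          ansible.builtin.copy:
            src: "{{ item }}.fact"                     # e.g. testbed_ceph_devices.fact
            dest: "/etc/ansible/facts.d/{{ item }}.fact"
            mode: "0644"
          loop:
            - testbed_ceph_devices
            - testbed_ceph_devices_all
            - testbed_ceph_osd_devices
            - testbed_ceph_osd_devices_all

Files placed in /etc/ansible/facts.d become visible under ansible_local on the next fact-gathering run, which is why the play ends with a "Gathers facts about hosts" pass over all hosts.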
2025-02-19 08:32:22.023389 | orchestrator | 2025-02-19 08:32:22.023516 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-02-19 08:32:22.024288 | orchestrator | 2025-02-19 08:32:22.025125 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-19 08:32:22.025679 | orchestrator | Wednesday 19 February 2025 08:32:22 +0000 (0:00:00.089) 0:00:00.089 **** 2025-02-19 08:32:23.436959 | orchestrator | ok: [testbed-manager] 2025-02-19 08:32:23.438254 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:32:23.438325 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:23.438345 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:32:23.441082 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:23.441773 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:23.442213 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:32:23.443321 | orchestrator | 2025-02-19 08:32:23.443826 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-02-19 08:32:23.444676 | orchestrator | Wednesday 19 February 2025 08:32:23 +0000 (0:00:01.413) 0:00:01.502 **** 2025-02-19 08:32:24.651726 | orchestrator | ok: [testbed-manager] 2025-02-19 08:32:24.652370 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:32:24.652856 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:32:24.653544 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:24.654876 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:24.655053 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:32:24.655207 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:24.655998 | orchestrator | 2025-02-19 08:32:24.656555 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-02-19 08:32:24.657058 | orchestrator | 2025-02-19 08:32:24.657487 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-19 08:32:24.657971 | orchestrator | Wednesday 19 February 2025 08:32:24 +0000 (0:00:01.217) 0:00:02.719 **** 2025-02-19 08:32:24.760075 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:24.760822 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:24.761078 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:24.764918 | orchestrator | 2025-02-19 08:32:24.918690 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-19 08:32:24.918814 | orchestrator | Wednesday 19 February 2025 08:32:24 +0000 (0:00:00.108) 0:00:02.828 **** 2025-02-19 08:32:24.918852 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:24.918970 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:24.919502 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:24.919749 | orchestrator | 2025-02-19 08:32:24.920391 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-19 08:32:24.920678 | orchestrator | Wednesday 19 February 2025 08:32:24 +0000 (0:00:00.159) 0:00:02.988 **** 2025-02-19 08:32:25.064118 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:25.064649 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:25.066226 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:25.067621 | orchestrator | 2025-02-19 08:32:25.068164 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-19 08:32:25.068194 | orchestrator | Wednesday 
19 February 2025 08:32:25 +0000 (0:00:00.145) 0:00:03.133 **** 2025-02-19 08:32:25.187494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:32:25.187808 | orchestrator | 2025-02-19 08:32:25.189116 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-19 08:32:25.189848 | orchestrator | Wednesday 19 February 2025 08:32:25 +0000 (0:00:00.122) 0:00:03.255 **** 2025-02-19 08:32:25.639969 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:25.640367 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:25.641802 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:25.642269 | orchestrator | 2025-02-19 08:32:25.642950 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-19 08:32:25.643826 | orchestrator | Wednesday 19 February 2025 08:32:25 +0000 (0:00:00.451) 0:00:03.707 **** 2025-02-19 08:32:25.737801 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:32:25.738785 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:32:25.740000 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:32:25.742172 | orchestrator | 2025-02-19 08:32:25.742525 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-19 08:32:25.743186 | orchestrator | Wednesday 19 February 2025 08:32:25 +0000 (0:00:00.099) 0:00:03.806 **** 2025-02-19 08:32:26.742910 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:26.743735 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:26.743782 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:26.743808 | orchestrator | 2025-02-19 08:32:26.744676 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-19 08:32:26.745857 | orchestrator | Wednesday 19 February 2025 08:32:26 +0000 (0:00:01.000) 0:00:04.807 **** 2025-02-19 08:32:27.211301 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:27.211883 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:27.211952 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:27.212309 | orchestrator | 2025-02-19 08:32:27.213238 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-19 08:32:27.213924 | orchestrator | Wednesday 19 February 2025 08:32:27 +0000 (0:00:00.470) 0:00:05.278 **** 2025-02-19 08:32:28.396433 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:28.396953 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:28.397775 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:28.398161 | orchestrator | 2025-02-19 08:32:28.399163 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-19 08:32:28.399829 | orchestrator | Wednesday 19 February 2025 08:32:28 +0000 (0:00:01.185) 0:00:06.464 **** 2025-02-19 08:32:41.598247 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:41.705871 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:41.705984 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:41.706003 | orchestrator | 2025-02-19 08:32:41.706081 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-02-19 08:32:41.706099 | orchestrator | Wednesday 19 February 2025 08:32:41 +0000 (0:00:13.200) 0:00:19.664 **** 2025-02-19 08:32:41.706130 | orchestrator | 
skipping: [testbed-node-3] 2025-02-19 08:32:41.706209 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:32:41.706643 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:32:41.707729 | orchestrator | 2025-02-19 08:32:41.709653 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-02-19 08:32:41.709997 | orchestrator | Wednesday 19 February 2025 08:32:41 +0000 (0:00:00.110) 0:00:19.774 **** 2025-02-19 08:32:49.117189 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:32:49.117449 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:32:49.117534 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:32:49.117950 | orchestrator | 2025-02-19 08:32:49.118115 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-19 08:32:49.119556 | orchestrator | Wednesday 19 February 2025 08:32:49 +0000 (0:00:07.410) 0:00:27.184 **** 2025-02-19 08:32:49.539629 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:49.540610 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:49.541407 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:49.544011 | orchestrator | 2025-02-19 08:32:53.080104 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-02-19 08:32:53.080261 | orchestrator | Wednesday 19 February 2025 08:32:49 +0000 (0:00:00.421) 0:00:27.606 **** 2025-02-19 08:32:53.080298 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-02-19 08:32:53.080819 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-02-19 08:32:53.080852 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-02-19 08:32:53.082208 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-02-19 08:32:53.083374 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-02-19 08:32:53.084714 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-02-19 08:32:53.085936 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-02-19 08:32:53.086629 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-02-19 08:32:53.086726 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-02-19 08:32:53.087185 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-02-19 08:32:53.088025 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-02-19 08:32:53.088795 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-02-19 08:32:53.089293 | orchestrator | 2025-02-19 08:32:53.089773 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-19 08:32:53.090388 | orchestrator | Wednesday 19 February 2025 08:32:53 +0000 (0:00:03.540) 0:00:31.146 **** 2025-02-19 08:32:54.174903 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:54.175076 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:54.175773 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:54.176664 | orchestrator | 2025-02-19 08:32:54.177025 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-19 08:32:54.177837 | orchestrator | 2025-02-19 08:32:54.180337 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-19 08:32:54.181547 | orchestrator | 
Wednesday 19 February 2025 08:32:54 +0000 (0:00:01.096) 0:00:32.243 **** 2025-02-19 08:32:58.317737 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:32:58.317886 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:32:58.319665 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:32:58.319796 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:32:58.320318 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:32:58.320936 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:32:58.322204 | orchestrator | ok: [testbed-manager] 2025-02-19 08:32:58.322615 | orchestrator | 2025-02-19 08:32:58.323122 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:32:58.323842 | orchestrator | 2025-02-19 08:32:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:32:58.324031 | orchestrator | 2025-02-19 08:32:58 | INFO  | Please wait and do not abort execution. 2025-02-19 08:32:58.324745 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:32:58.325404 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:32:58.325993 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:32:58.326603 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:32:58.327302 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:32:58.327725 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:32:58.328379 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:32:58.329045 | orchestrator | 2025-02-19 08:32:58.329738 | orchestrator | 2025-02-19 08:32:58.330067 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:32:58.330483 | orchestrator | Wednesday 19 February 2025 08:32:58 +0000 (0:00:04.142) 0:00:36.385 **** 2025-02-19 08:32:58.330818 | orchestrator | =============================================================================== 2025-02-19 08:32:58.331180 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.20s 2025-02-19 08:32:58.331888 | orchestrator | Install required packages (Debian) -------------------------------------- 7.41s 2025-02-19 08:32:58.332308 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.14s 2025-02-19 08:32:58.333632 | orchestrator | Copy fact files --------------------------------------------------------- 3.54s 2025-02-19 08:32:58.333892 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s 2025-02-19 08:32:58.334408 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s 2025-02-19 08:32:58.335090 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.19s 2025-02-19 08:32:58.335702 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.10s 2025-02-19 08:32:58.336227 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s 2025-02-19 08:32:58.336860 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.47s 2025-02-19 08:32:58.337171 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-02-19 08:32:58.337840 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2025-02-19 08:32:58.338148 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s 2025-02-19 08:32:58.338939 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.15s 2025-02-19 08:32:58.339131 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2025-02-19 08:32:58.339842 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-02-19 08:32:58.340100 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-02-19 08:32:58.340551 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2025-02-19 08:32:58.789736 | orchestrator | + osism apply bootstrap 2025-02-19 08:33:00.275752 | orchestrator | 2025-02-19 08:33:00 | INFO  | Task 60f20bd1-d1eb-46ad-8191-366c1147623b (bootstrap) was prepared for execution. 2025-02-19 08:33:03.667985 | orchestrator | 2025-02-19 08:33:00 | INFO  | It takes a moment until task 60f20bd1-d1eb-46ad-8191-366c1147623b (bootstrap) has been started and output is visible here. 2025-02-19 08:33:03.668207 | orchestrator | 2025-02-19 08:33:03.669424 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-02-19 08:33:03.669532 | orchestrator | 2025-02-19 08:33:03.669955 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-02-19 08:33:03.671240 | orchestrator | Wednesday 19 February 2025 08:33:03 +0000 (0:00:00.111) 0:00:00.111 **** 2025-02-19 08:33:03.758889 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:03.789196 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:03.822272 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:03.854999 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:03.935306 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:03.935639 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:03.936531 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:03.936966 | orchestrator | 2025-02-19 08:33:03.937475 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-19 08:33:03.941612 | orchestrator | 2025-02-19 08:33:03.941975 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-19 08:33:03.942818 | orchestrator | Wednesday 19 February 2025 08:33:03 +0000 (0:00:00.270) 0:00:00.382 **** 2025-02-19 08:33:08.558253 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:08.558908 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:08.558963 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:08.559938 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:08.561001 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:08.561652 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:08.561996 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:08.562144 | orchestrator | 2025-02-19 08:33:08.563485 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-02-19 08:33:08.565533 | orchestrator | 2025-02-19 08:33:08.566419 | orchestrator | TASK [Gathers facts 
about hosts] *********************************************** 2025-02-19 08:33:08.568898 | orchestrator | Wednesday 19 February 2025 08:33:08 +0000 (0:00:04.619) 0:00:05.002 **** 2025-02-19 08:33:08.664414 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-02-19 08:33:08.666157 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-02-19 08:33:08.689656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-02-19 08:33:08.689768 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-02-19 08:33:08.691839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 08:33:08.718478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-02-19 08:33:08.718625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 08:33:08.718702 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-02-19 08:33:08.719472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 08:33:08.746448 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-02-19 08:33:09.042780 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-02-19 08:33:09.042898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 08:33:09.044757 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-02-19 08:33:09.048092 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-19 08:33:09.049222 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:09.049255 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-02-19 08:33:09.049267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 08:33:09.049279 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-02-19 08:33:09.049290 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 08:33:09.049307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 08:33:09.050226 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-19 08:33:09.051069 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:33:09.051764 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-02-19 08:33:09.052715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 08:33:09.053094 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 08:33:09.054516 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 08:33:09.055037 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 08:33:09.055848 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-19 08:33:09.056612 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-02-19 08:33:09.057117 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 08:33:09.057824 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-19 08:33:09.058385 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-19 08:33:09.059457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 08:33:09.060379 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-19 08:33:09.061036 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-19 
08:33:09.062158 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:33:09.062437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 08:33:09.064211 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-19 08:33:09.068775 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 08:33:09.070749 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-19 08:33:09.070782 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-19 08:33:09.070796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 08:33:09.070815 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 08:33:09.071871 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-19 08:33:09.072743 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-19 08:33:09.072773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 08:33:09.072815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 08:33:09.073537 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:33:09.073563 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-19 08:33:09.073610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 08:33:09.074563 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:33:09.074835 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-19 08:33:09.075050 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:33:09.075423 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-19 08:33:09.076079 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-19 08:33:09.076295 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:33:09.076522 | orchestrator | 2025-02-19 08:33:09.076555 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-02-19 08:33:09.076862 | orchestrator | 2025-02-19 08:33:09.077183 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-02-19 08:33:09.077512 | orchestrator | Wednesday 19 February 2025 08:33:09 +0000 (0:00:00.487) 0:00:05.489 **** 2025-02-19 08:33:09.107842 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:09.135013 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:09.178927 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:09.205176 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:09.265656 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:09.265883 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:09.266808 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:09.267523 | orchestrator | 2025-02-19 08:33:09.268004 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-02-19 08:33:09.268990 | orchestrator | Wednesday 19 February 2025 08:33:09 +0000 (0:00:00.221) 0:00:05.710 **** 2025-02-19 08:33:10.733414 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:10.733982 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:10.734072 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:10.734923 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:10.735517 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:10.736090 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:10.736517 | orchestrator | ok: [testbed-manager] 
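Editor's note: the bootstrap play continues with basic host identity and connectivity settings (hostname, /etc/hosts, apt proxy, resolv.conf, repositories, rsyslog). A hedged sketch of the hostname/hosts portion only (not the real osism.commons.hostname/hosts roles; hosts.j2 is a hypothetical template, whereas the actual role renders /etc/hosts from the inventory via its "type-template" tasks):

    ---
    # Hedged sketch - host identity handling similar to the tasks logged below.
    - hosts: all
      become: true
      tasks:
        - name: Set hostname
          ansible.builtin.hostname:
            name: "{{ inventory_hostname }}"

        - name: Copy /etc/hostname
          ansible.builtin.copy:
            content: "{{ inventory_hostname }}\n"
            dest: /etc/hostname
            mode: "0644"

        - name: Copy /etc/hosts file
          ansible.builtin.template:
            src: hosts.j2              # hypothetical template name
            dest: /etc/hosts
            mode: "0644"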
2025-02-19 08:33:10.737276 | orchestrator | 2025-02-19 08:33:10.738159 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-02-19 08:33:10.738614 | orchestrator | Wednesday 19 February 2025 08:33:10 +0000 (0:00:01.467) 0:00:07.178 **** 2025-02-19 08:33:12.142240 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:12.142475 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:12.143257 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:12.144374 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:12.145073 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:12.145832 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:12.146432 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:12.147395 | orchestrator | 2025-02-19 08:33:12.148236 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-02-19 08:33:12.148453 | orchestrator | Wednesday 19 February 2025 08:33:12 +0000 (0:00:01.403) 0:00:08.582 **** 2025-02-19 08:33:12.428307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:33:12.428526 | orchestrator | 2025-02-19 08:33:12.429056 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-02-19 08:33:12.429477 | orchestrator | Wednesday 19 February 2025 08:33:12 +0000 (0:00:00.291) 0:00:08.874 **** 2025-02-19 08:33:14.706656 | orchestrator | changed: [testbed-manager] 2025-02-19 08:33:14.707438 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:33:14.710007 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:14.710943 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:14.710978 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:14.710990 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:33:14.711887 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:33:14.712682 | orchestrator | 2025-02-19 08:33:14.713438 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-02-19 08:33:14.713895 | orchestrator | Wednesday 19 February 2025 08:33:14 +0000 (0:00:02.276) 0:00:11.151 **** 2025-02-19 08:33:14.784987 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:15.015310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:33:15.015475 | orchestrator | 2025-02-19 08:33:15.015893 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-02-19 08:33:15.016411 | orchestrator | Wednesday 19 February 2025 08:33:15 +0000 (0:00:00.309) 0:00:11.460 **** 2025-02-19 08:33:16.121117 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:33:16.121688 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:33:16.122197 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:16.123131 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:33:16.124944 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:16.125028 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:16.125049 | orchestrator | 2025-02-19 08:33:16.125725 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment 
file] ****** 2025-02-19 08:33:16.126096 | orchestrator | Wednesday 19 February 2025 08:33:16 +0000 (0:00:01.105) 0:00:12.566 **** 2025-02-19 08:33:16.203472 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:16.725030 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:16.725742 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:16.726704 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:33:16.727640 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:33:16.728412 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:33:16.729112 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:16.730757 | orchestrator | 2025-02-19 08:33:16.731075 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-02-19 08:33:16.731453 | orchestrator | Wednesday 19 February 2025 08:33:16 +0000 (0:00:00.604) 0:00:13.170 **** 2025-02-19 08:33:16.830835 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:33:16.850722 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:33:16.880851 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:33:17.192699 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:33:17.193343 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:33:17.194287 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:33:17.194739 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:17.195756 | orchestrator | 2025-02-19 08:33:17.197021 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-02-19 08:33:17.197231 | orchestrator | Wednesday 19 February 2025 08:33:17 +0000 (0:00:00.468) 0:00:13.639 **** 2025-02-19 08:33:17.280530 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:17.310917 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:33:17.336946 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:33:17.364032 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:33:17.454619 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:33:17.456478 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:33:17.457029 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:33:17.457740 | orchestrator | 2025-02-19 08:33:17.458010 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-02-19 08:33:17.458473 | orchestrator | Wednesday 19 February 2025 08:33:17 +0000 (0:00:00.260) 0:00:13.899 **** 2025-02-19 08:33:17.809971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:33:17.811759 | orchestrator | 2025-02-19 08:33:18.180284 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-02-19 08:33:18.180408 | orchestrator | Wednesday 19 February 2025 08:33:17 +0000 (0:00:00.353) 0:00:14.253 **** 2025-02-19 08:33:18.180446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:33:18.181027 | orchestrator | 2025-02-19 08:33:18.181808 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-02-19 08:33:18.182836 | orchestrator | 
Wednesday 19 February 2025 08:33:18 +0000 (0:00:00.371) 0:00:14.625 **** 2025-02-19 08:33:19.660651 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:19.661407 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:19.661451 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:19.663197 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:19.663890 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:19.664119 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:19.664155 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:19.664500 | orchestrator | 2025-02-19 08:33:19.665001 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-02-19 08:33:19.759467 | orchestrator | Wednesday 19 February 2025 08:33:19 +0000 (0:00:01.477) 0:00:16.102 **** 2025-02-19 08:33:19.759559 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:19.787735 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:33:19.814559 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:33:19.846818 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:33:19.913378 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:33:19.914719 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:33:19.915003 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:33:19.915034 | orchestrator | 2025-02-19 08:33:19.916625 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-02-19 08:33:19.919656 | orchestrator | Wednesday 19 February 2025 08:33:19 +0000 (0:00:00.256) 0:00:16.359 **** 2025-02-19 08:33:20.517827 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:20.517994 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:20.518901 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:20.522701 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:20.522900 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:20.523341 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:20.523360 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:20.523372 | orchestrator | 2025-02-19 08:33:20.524077 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-02-19 08:33:20.524746 | orchestrator | Wednesday 19 February 2025 08:33:20 +0000 (0:00:00.603) 0:00:16.962 **** 2025-02-19 08:33:20.613679 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:20.711333 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:33:20.764161 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:33:20.855342 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:33:20.855682 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:33:20.856877 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:33:20.857282 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:33:20.858128 | orchestrator | 2025-02-19 08:33:20.858870 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-02-19 08:33:20.861670 | orchestrator | Wednesday 19 February 2025 08:33:20 +0000 (0:00:00.338) 0:00:17.301 **** 2025-02-19 08:33:21.469708 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:21.469965 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:33:21.471028 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:21.472003 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:33:21.473436 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:33:21.474302 | orchestrator | changed: 
[testbed-node-2] 2025-02-19 08:33:21.475098 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:21.476174 | orchestrator | 2025-02-19 08:33:21.476790 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-02-19 08:33:21.478602 | orchestrator | Wednesday 19 February 2025 08:33:21 +0000 (0:00:00.613) 0:00:17.914 **** 2025-02-19 08:33:22.833181 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:22.833929 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:33:22.834847 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:33:22.835922 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:33:22.836764 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:22.838986 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:22.839155 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:22.840049 | orchestrator | 2025-02-19 08:33:22.840845 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-02-19 08:33:22.841115 | orchestrator | Wednesday 19 February 2025 08:33:22 +0000 (0:00:01.361) 0:00:19.276 **** 2025-02-19 08:33:24.167170 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:24.167355 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:24.168152 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:24.168248 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:24.168790 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:24.168994 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:24.169749 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:24.170412 | orchestrator | 2025-02-19 08:33:24.171022 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-02-19 08:33:24.171517 | orchestrator | Wednesday 19 February 2025 08:33:24 +0000 (0:00:01.335) 0:00:20.611 **** 2025-02-19 08:33:24.494426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:33:24.495016 | orchestrator | 2025-02-19 08:33:24.495767 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-02-19 08:33:24.496313 | orchestrator | Wednesday 19 February 2025 08:33:24 +0000 (0:00:00.328) 0:00:20.940 **** 2025-02-19 08:33:24.574163 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:25.869092 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:33:25.878990 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:33:25.879747 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:25.879799 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:33:25.879825 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:25.879849 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:25.879873 | orchestrator | 2025-02-19 08:33:25.879899 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-19 08:33:25.879936 | orchestrator | Wednesday 19 February 2025 08:33:25 +0000 (0:00:01.372) 0:00:22.312 **** 2025-02-19 08:33:25.944492 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:26.025239 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:26.049643 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:26.082732 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:26.150129 | orchestrator | ok: 
[testbed-node-0] 2025-02-19 08:33:26.150328 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:26.151871 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:26.152453 | orchestrator | 2025-02-19 08:33:26.152980 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-19 08:33:26.153835 | orchestrator | Wednesday 19 February 2025 08:33:26 +0000 (0:00:00.282) 0:00:22.594 **** 2025-02-19 08:33:26.232190 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:26.294998 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:26.322070 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:26.411915 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:26.412504 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:26.413608 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:26.413985 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:26.414768 | orchestrator | 2025-02-19 08:33:26.415284 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-19 08:33:26.415744 | orchestrator | Wednesday 19 February 2025 08:33:26 +0000 (0:00:00.263) 0:00:22.858 **** 2025-02-19 08:33:26.540023 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:26.582433 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:26.621922 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:26.696164 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:26.696912 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:26.697297 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:26.697652 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:26.698807 | orchestrator | 2025-02-19 08:33:26.699388 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-19 08:33:26.699682 | orchestrator | Wednesday 19 February 2025 08:33:26 +0000 (0:00:00.284) 0:00:23.142 **** 2025-02-19 08:33:26.991805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:33:26.992666 | orchestrator | 2025-02-19 08:33:26.993192 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-19 08:33:27.000441 | orchestrator | Wednesday 19 February 2025 08:33:26 +0000 (0:00:00.294) 0:00:23.436 **** 2025-02-19 08:33:27.552439 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:27.552751 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:27.552804 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:27.552917 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:27.553698 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:27.554146 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:27.556473 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:27.556617 | orchestrator | 2025-02-19 08:33:27.558131 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-19 08:33:27.559545 | orchestrator | Wednesday 19 February 2025 08:33:27 +0000 (0:00:00.562) 0:00:23.998 **** 2025-02-19 08:33:27.634843 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:27.661848 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:33:27.689994 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:33:27.715678 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:33:27.788231 | orchestrator | 
skipping: [testbed-node-0] 2025-02-19 08:33:27.788409 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:33:27.789360 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:33:27.789401 | orchestrator | 2025-02-19 08:33:27.790002 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-19 08:33:27.790847 | orchestrator | Wednesday 19 February 2025 08:33:27 +0000 (0:00:00.233) 0:00:24.232 **** 2025-02-19 08:33:28.922558 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:28.922945 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:28.924388 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:28.925895 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:28.926543 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:28.927344 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:28.928030 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:28.928507 | orchestrator | 2025-02-19 08:33:28.929240 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-19 08:33:28.929799 | orchestrator | Wednesday 19 February 2025 08:33:28 +0000 (0:00:01.132) 0:00:25.364 **** 2025-02-19 08:33:29.533146 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:29.533374 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:29.533429 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:29.533828 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:29.534222 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:29.534543 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:29.534836 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:29.535192 | orchestrator | 2025-02-19 08:33:29.535684 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-19 08:33:29.535950 | orchestrator | Wednesday 19 February 2025 08:33:29 +0000 (0:00:00.613) 0:00:25.978 **** 2025-02-19 08:33:30.698832 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:30.699229 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:30.700300 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:30.702214 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:30.703110 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:30.703172 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:30.704001 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:30.705313 | orchestrator | 2025-02-19 08:33:30.705686 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-19 08:33:30.706642 | orchestrator | Wednesday 19 February 2025 08:33:30 +0000 (0:00:01.162) 0:00:27.141 **** 2025-02-19 08:33:44.867017 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:44.868917 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:44.869013 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:44.869031 | orchestrator | changed: [testbed-manager] 2025-02-19 08:33:44.869061 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:44.869643 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:44.870656 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:44.871187 | orchestrator | 2025-02-19 08:33:44.871647 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-02-19 08:33:44.872295 | orchestrator | Wednesday 19 February 2025 08:33:44 +0000 (0:00:14.164) 0:00:41.306 **** 2025-02-19 08:33:44.949132 | orchestrator | ok: [testbed-manager] 2025-02-19 
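The repository tasks above drop the legacy /etc/apt/sources.list in favour of a deb822-style /etc/apt/sources.list.d/ubuntu.sources (the format Ubuntu 24.04 uses by default) and then refresh the package cache. A sketch of that pattern; the archive URIs shown are the stock Ubuntu mirrors, whereas a testbed deployment typically substitutes its own mirror here.

    # Sketch only: write a deb822-style ubuntu.sources on Ubuntu 24.04 ("noble").
    # The mirror URIs are illustrative defaults.
    - name: Copy ubuntu.sources file (illustrative)
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu/
          Suites: noble noble-updates noble-backports
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

          Types: deb
          URIs: http://security.ubuntu.com/ubuntu/
          Suites: noble-security
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
        mode: "0644"

    - name: Remove sources.list file
      ansible.builtin.file:
        path: /etc/apt/sources.list
        state: absent

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true

The deb822 format keeps the repository type, suites, components and signing key in one stanza per source, which is why the role removes the old one-line sources.list once the new file is in place.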
08:33:44.982995 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:45.011856 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:45.043059 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:45.110256 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:45.111166 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:45.111893 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:45.112869 | orchestrator | 2025-02-19 08:33:45.113789 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-02-19 08:33:45.114745 | orchestrator | Wednesday 19 February 2025 08:33:45 +0000 (0:00:00.250) 0:00:41.556 **** 2025-02-19 08:33:45.192035 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:45.222317 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:45.252736 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:45.284954 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:45.357025 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:45.357657 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:45.359461 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:45.359612 | orchestrator | 2025-02-19 08:33:45.361138 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-02-19 08:33:45.361606 | orchestrator | Wednesday 19 February 2025 08:33:45 +0000 (0:00:00.246) 0:00:41.802 **** 2025-02-19 08:33:45.435423 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:45.466097 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:45.494788 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:45.524337 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:45.592537 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:45.593245 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:45.593343 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:45.594254 | orchestrator | 2025-02-19 08:33:45.594923 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-02-19 08:33:45.595283 | orchestrator | Wednesday 19 February 2025 08:33:45 +0000 (0:00:00.236) 0:00:42.039 **** 2025-02-19 08:33:45.893670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:33:45.893826 | orchestrator | 2025-02-19 08:33:45.893849 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-02-19 08:33:45.894272 | orchestrator | Wednesday 19 February 2025 08:33:45 +0000 (0:00:00.298) 0:00:42.338 **** 2025-02-19 08:33:47.978760 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:47.979219 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:47.979304 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:47.979521 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:47.980388 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:47.980780 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:47.982687 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:47.982721 | orchestrator | 2025-02-19 08:33:47.982816 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-02-19 08:33:47.982987 | orchestrator | Wednesday 19 February 2025 08:33:47 +0000 (0:00:02.082) 0:00:44.421 **** 2025-02-19 08:33:49.146128 | orchestrator | changed: [testbed-node-4] 2025-02-19 
08:33:49.148847 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:33:49.149707 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:33:49.149740 | orchestrator | changed: [testbed-manager] 2025-02-19 08:33:49.149757 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:49.149811 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:49.149881 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:49.150600 | orchestrator | 2025-02-19 08:33:49.151107 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-02-19 08:33:49.151761 | orchestrator | Wednesday 19 February 2025 08:33:49 +0000 (0:00:01.169) 0:00:45.590 **** 2025-02-19 08:33:50.064765 | orchestrator | ok: [testbed-manager] 2025-02-19 08:33:50.065000 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:33:50.065341 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:33:50.065378 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:33:50.065626 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:33:50.065879 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:33:50.066192 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:33:50.067210 | orchestrator | 2025-02-19 08:33:50.067800 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-02-19 08:33:50.068314 | orchestrator | Wednesday 19 February 2025 08:33:50 +0000 (0:00:00.919) 0:00:46.509 **** 2025-02-19 08:33:50.410235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:33:50.411220 | orchestrator | 2025-02-19 08:33:50.411356 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-02-19 08:33:50.411403 | orchestrator | Wednesday 19 February 2025 08:33:50 +0000 (0:00:00.346) 0:00:46.856 **** 2025-02-19 08:33:51.497008 | orchestrator | changed: [testbed-manager] 2025-02-19 08:33:51.497834 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:33:51.498816 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:33:51.499077 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:33:51.500655 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:33:51.500960 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:33:51.500987 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:33:51.501731 | orchestrator | 2025-02-19 08:33:51.502334 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-02-19 08:33:51.502812 | orchestrator | Wednesday 19 February 2025 08:33:51 +0000 (0:00:01.084) 0:00:47.940 **** 2025-02-19 08:33:51.573853 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:33:51.598751 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:33:51.627513 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:33:51.660425 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:33:51.835928 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:33:51.837164 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:33:51.837243 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:33:51.838156 | orchestrator | 2025-02-19 08:33:51.839010 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-02-19 08:33:51.839876 | orchestrator | Wednesday 19 February 2025 08:33:51 +0000 (0:00:00.340) 0:00:48.280 **** 
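The rsyslog tasks above install rsyslog, ship its main configuration, and add a rule that forwards all syslog traffic to a fluentd daemon on the same host. A forwarding rule of that kind can be dropped in as shown below; the drop-in file name and the port 5140 (fluentd's default syslog input port) are assumptions for illustration.

    # Sketch only: forward all syslog messages to a local fluentd syslog input.
    # File name and port are assumptions, not taken from the osism role.
    - name: Forward syslog messages to local fluentd daemon (illustrative)
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/10-fluentd.conf
        content: |
          # Send everything to the local fluentd syslog input via UDP.
          action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
        mode: "0644"

    - name: Restart rsyslog to pick up the new rule
      ansible.builtin.service:
        name: rsyslog
        state: restarted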
2025-02-19 08:34:05.230166 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:34:05.230345 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:34:05.230368 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:34:05.230383 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:34:05.230398 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:34:05.230418 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:34:05.230750 | orchestrator | changed: [testbed-manager] 2025-02-19 08:34:05.231215 | orchestrator | 2025-02-19 08:34:05.232716 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-02-19 08:34:05.234250 | orchestrator | Wednesday 19 February 2025 08:34:05 +0000 (0:00:13.392) 0:01:01.673 **** 2025-02-19 08:34:06.727752 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:06.728448 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:06.729665 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:06.730899 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:06.732625 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:06.733356 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:34:06.734141 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:06.734822 | orchestrator | 2025-02-19 08:34:06.735169 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-02-19 08:34:06.735831 | orchestrator | Wednesday 19 February 2025 08:34:06 +0000 (0:00:01.497) 0:01:03.170 **** 2025-02-19 08:34:07.734237 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:07.740551 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:07.741070 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:07.741200 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:07.742106 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:07.749068 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:34:07.749735 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:07.749785 | orchestrator | 2025-02-19 08:34:07.752708 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-02-19 08:34:07.752818 | orchestrator | Wednesday 19 February 2025 08:34:07 +0000 (0:00:01.007) 0:01:04.177 **** 2025-02-19 08:34:07.792059 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-02-19 08:34:07.814447 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:07.859714 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:07.890318 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:07.927167 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:08.010264 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:08.010744 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:08.010790 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:34:08.011653 | orchestrator | 2025-02-19 08:34:08.012374 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-02-19 08:34:08.012779 | orchestrator | Wednesday 19 February 2025 08:34:08 +0000 (0:00:00.278) 0:01:04.456 **** 2025-02-19 08:34:08.096149 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:08.126284 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:08.155505 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:08.197166 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:08.274170 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:08.275501 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:08.277521 | orchestrator | ok: 
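The systohc role above installs util-linux-extra (which carries the hwclock binary on Ubuntu 24.04) and writes the current system time to the hardware clock, and the configfs role starts the systemd sys-kernel-config mount unit so configfs is available under /sys/kernel/config. A condensed sketch of those steps:

    # Sketch only: sync the RTC from system time and mount configfs via the
    # systemd-provided sys-kernel-config.mount unit.
    - name: Install util-linux-extra package
      ansible.builtin.apt:
        name: util-linux-extra
        state: present

    - name: Sync hardware clock
      ansible.builtin.command:
        cmd: hwclock --systohc
      changed_when: false   # report "ok" instead of "changed" for the sync

    - name: Start sys-kernel-config mount
      ansible.builtin.systemd:
        name: sys-kernel-config.mount
        state: started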
[testbed-node-2] 2025-02-19 08:34:08.635731 | orchestrator | 2025-02-19 08:34:08.635886 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-02-19 08:34:08.635908 | orchestrator | Wednesday 19 February 2025 08:34:08 +0000 (0:00:00.265) 0:01:04.721 **** 2025-02-19 08:34:08.635942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:34:08.636025 | orchestrator | 2025-02-19 08:34:08.636891 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-02-19 08:34:08.639818 | orchestrator | Wednesday 19 February 2025 08:34:08 +0000 (0:00:00.359) 0:01:05.081 **** 2025-02-19 08:34:10.529610 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:10.529797 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:10.532472 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:10.535424 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:34:10.535494 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:10.535523 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:10.536764 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:10.536943 | orchestrator | 2025-02-19 08:34:10.538117 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-02-19 08:34:10.538518 | orchestrator | Wednesday 19 February 2025 08:34:10 +0000 (0:00:01.892) 0:01:06.973 **** 2025-02-19 08:34:11.146163 | orchestrator | changed: [testbed-manager] 2025-02-19 08:34:11.146313 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:34:11.147789 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:34:11.148847 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:34:11.149636 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:34:11.150834 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:34:11.151819 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:34:11.152171 | orchestrator | 2025-02-19 08:34:11.152897 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-02-19 08:34:11.153699 | orchestrator | Wednesday 19 February 2025 08:34:11 +0000 (0:00:00.616) 0:01:07.590 **** 2025-02-19 08:34:11.269231 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:11.298367 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:11.332026 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:11.358431 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:11.432675 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:11.434333 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:11.435600 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:34:11.436590 | orchestrator | 2025-02-19 08:34:11.438304 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-02-19 08:34:11.439899 | orchestrator | Wednesday 19 February 2025 08:34:11 +0000 (0:00:00.286) 0:01:07.876 **** 2025-02-19 08:34:12.695227 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:12.695411 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:12.695439 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:12.695966 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:12.696637 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:12.698674 | orchestrator | ok: [testbed-node-2] 2025-02-19 
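"Set needrestart mode" above switches needrestart out of its interactive default so later package operations do not hang on the "which services should be restarted?" prompt. needrestart reads Perl-style settings from /etc/needrestart/needrestart.conf and conf.d drop-ins; a sketch of forcing automatic restarts (the drop-in file name is an assumption; 'a' means automatic, 'l' list-only):

    # Sketch only: make needrestart non-interactive for unattended apt runs.
    - name: Set needrestart mode (illustrative)
      ansible.builtin.copy:
        dest: /etc/needrestart/conf.d/99-ansible.conf   # hypothetical file name
        content: |
          $nrconf{restart} = 'a';
        mode: "0644"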
08:34:12.702073 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:12.702108 | orchestrator | 2025-02-19 08:34:12.702894 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-02-19 08:34:12.704670 | orchestrator | Wednesday 19 February 2025 08:34:12 +0000 (0:00:01.260) 0:01:09.137 **** 2025-02-19 08:34:14.602811 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:34:14.603171 | orchestrator | changed: [testbed-manager] 2025-02-19 08:34:14.603213 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:34:14.605005 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:34:14.605885 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:34:14.607037 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:34:14.608035 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:34:14.609332 | orchestrator | 2025-02-19 08:34:14.610006 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-02-19 08:34:14.611015 | orchestrator | Wednesday 19 February 2025 08:34:14 +0000 (0:00:01.908) 0:01:11.046 **** 2025-02-19 08:34:17.382937 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:17.383066 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:17.384067 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:17.384955 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:17.385835 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:34:17.387590 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:17.388806 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:17.389447 | orchestrator | 2025-02-19 08:34:17.390068 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-02-19 08:34:17.390921 | orchestrator | Wednesday 19 February 2025 08:34:17 +0000 (0:00:02.780) 0:01:13.826 **** 2025-02-19 08:34:53.859651 | orchestrator | ok: [testbed-manager] 2025-02-19 08:34:53.860372 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:34:53.860421 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:34:53.861318 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:34:53.862427 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:34:53.863062 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:34:53.863500 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:34:53.864038 | orchestrator | 2025-02-19 08:34:53.864693 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-02-19 08:34:53.865264 | orchestrator | Wednesday 19 February 2025 08:34:53 +0000 (0:00:36.473) 0:01:50.300 **** 2025-02-19 08:36:02.858007 | orchestrator | changed: [testbed-manager] 2025-02-19 08:36:02.858639 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:36:02.858715 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:36:02.860019 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:36:02.860170 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:36:02.860251 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:36:02.860941 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:36:02.861316 | orchestrator | 2025-02-19 08:36:02.862102 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-02-19 08:36:02.862700 | orchestrator | Wednesday 19 February 2025 08:36:02 +0000 (0:01:08.997) 0:02:59.297 **** 2025-02-19 08:36:04.748750 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:04.748930 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:04.749420 | orchestrator | 
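The packages role then pre-downloads and applies a full upgrade, installs the list of required packages, and cleans up the apt cache; on this run the install step alone took just over a minute per node. With the stock apt module that pattern looks roughly like the following, where the package list and cache lifetime are placeholders:

    # Sketch only: upgrade and install packages with ansible.builtin.apt.
    # Package names and cache_valid_time are placeholders.
    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
        cache_valid_time: 3600

    - name: Install required packages
      ansible.builtin.apt:
        name:
          - acl
          - curl
          - python3-netaddr
        state: present

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true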
ok: [testbed-node-4] 2025-02-19 08:36:04.750142 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:04.750940 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:36:04.751910 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:04.752510 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:04.753012 | orchestrator | 2025-02-19 08:36:04.753838 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-02-19 08:36:04.755968 | orchestrator | Wednesday 19 February 2025 08:36:04 +0000 (0:00:01.895) 0:03:01.193 **** 2025-02-19 08:36:17.879761 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:36:17.879942 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:17.879967 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:17.879982 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:17.879996 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:36:17.880011 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:17.880030 | orchestrator | changed: [testbed-manager] 2025-02-19 08:36:17.882137 | orchestrator | 2025-02-19 08:36:17.882417 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-02-19 08:36:17.886144 | orchestrator | Wednesday 19 February 2025 08:36:17 +0000 (0:00:13.123) 0:03:14.316 **** 2025-02-19 08:36:18.398294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-02-19 08:36:18.398510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-02-19 08:36:18.400200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-02-19 08:36:18.400420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-02-19 08:36:18.400971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2025-02-19 08:36:18.404180 | orchestrator | 2025-02-19 08:36:18.406284 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-02-19 08:36:18.407024 | orchestrator | Wednesday 19 February 2025 08:36:18 +0000 (0:00:00.521) 0:03:14.838 **** 2025-02-19 08:36:18.455065 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-19 08:36:18.488115 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-19 08:36:18.490087 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:36:18.490188 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-19 08:36:18.521272 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:36:18.546346 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-02-19 08:36:18.546454 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:36:18.589004 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:36:19.173763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-19 08:36:19.173986 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-19 08:36:19.174013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-19 08:36:19.174103 | orchestrator | 2025-02-19 08:36:19.175316 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-02-19 08:36:19.176473 | orchestrator | Wednesday 19 February 2025 08:36:19 +0000 (0:00:00.777) 0:03:15.615 **** 2025-02-19 08:36:19.257782 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-19 08:36:19.258400 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-19 08:36:19.258922 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-19 08:36:19.259144 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-19 08:36:19.259620 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-19 08:36:19.261323 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-19 08:36:19.261536 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-19 08:36:19.261871 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-19 08:36:19.262345 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-19 08:36:19.264562 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-19 08:36:19.264777 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-19 08:36:19.311974 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-19 08:36:19.312219 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:36:19.312938 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-19 08:36:19.316419 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-19 08:36:19.316868 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-19 08:36:19.316919 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-19 08:36:19.317171 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-19 08:36:19.320062 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-19 08:36:19.322867 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-19 08:36:19.345367 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-19 08:36:19.345489 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-19 08:36:19.345538 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-19 08:36:19.345929 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:36:19.346126 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-19 08:36:19.347138 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-19 08:36:19.347554 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-19 08:36:19.348249 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-19 08:36:19.350077 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-02-19 08:36:19.382461 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-19 08:36:19.382891 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-02-19 08:36:19.383563 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-19 08:36:19.387692 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-02-19 08:36:19.387919 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-02-19 08:36:19.387950 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-19 08:36:19.387972 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-02-19 08:36:19.388565 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-19 08:36:19.389240 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-02-19 08:36:19.389663 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-02-19 08:36:19.416470 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-02-19 08:36:19.416741 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-19 
08:36:19.420330 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:36:25.382932 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-19 08:36:25.383081 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:36:25.387470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-19 08:36:25.392985 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-19 08:36:25.393055 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-19 08:36:25.401208 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-19 08:36:25.401880 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-19 08:36:25.404877 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-19 08:36:25.405481 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-19 08:36:25.405970 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-19 08:36:25.406310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-19 08:36:25.406798 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-19 08:36:25.407301 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-19 08:36:25.409770 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-19 08:36:25.409989 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-19 08:36:25.410063 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-19 08:36:25.412267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-19 08:36:25.412429 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-19 08:36:25.415511 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-19 08:36:25.415940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-19 08:36:25.416083 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-19 08:36:25.416235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-19 08:36:25.416994 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-19 08:36:25.419885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-19 08:36:25.420142 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-19 08:36:25.420892 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-19 08:36:25.421249 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-19 
08:36:25.422209 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-19 08:36:25.422436 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-19 08:36:25.422986 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-19 08:36:25.423700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-19 08:36:25.424005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-19 08:36:25.424259 | orchestrator | 2025-02-19 08:36:25.424819 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-02-19 08:36:25.427096 | orchestrator | Wednesday 19 February 2025 08:36:25 +0000 (0:00:06.209) 0:03:21.825 **** 2025-02-19 08:36:26.906181 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-19 08:36:26.906803 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-19 08:36:26.909887 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-19 08:36:26.910071 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-19 08:36:26.910107 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-19 08:36:26.910717 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-19 08:36:26.911464 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-19 08:36:26.911958 | orchestrator | 2025-02-19 08:36:26.912619 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-02-19 08:36:26.913673 | orchestrator | Wednesday 19 February 2025 08:36:26 +0000 (0:00:01.522) 0:03:23.348 **** 2025-02-19 08:36:26.965180 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-19 08:36:27.021126 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:36:27.079421 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-19 08:36:28.415352 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-19 08:36:28.416564 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:36:28.417394 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:36:28.417413 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-19 08:36:28.418769 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:36:28.419680 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-19 08:36:28.420550 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-19 08:36:28.421232 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-19 08:36:28.422399 | orchestrator | 2025-02-19 08:36:28.423125 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-02-19 08:36:28.424466 | orchestrator | Wednesday 19 February 2025 
08:36:28 +0000 (0:00:01.510) 0:03:24.859 **** 2025-02-19 08:36:28.481546 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-19 08:36:28.511711 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:36:28.566285 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-19 08:36:28.600726 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:36:28.601096 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-19 08:36:29.049607 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:36:29.050228 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-19 08:36:29.050916 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:36:29.051350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-19 08:36:29.052159 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-19 08:36:29.052615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-19 08:36:29.053291 | orchestrator | 2025-02-19 08:36:29.053605 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-02-19 08:36:29.055297 | orchestrator | Wednesday 19 February 2025 08:36:29 +0000 (0:00:00.635) 0:03:25.494 **** 2025-02-19 08:36:29.106508 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:36:29.140959 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:36:29.189559 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:36:29.220008 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:36:29.249415 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:36:29.393079 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:36:29.394139 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:36:29.395352 | orchestrator | 2025-02-19 08:36:29.397842 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-02-19 08:36:34.878352 | orchestrator | Wednesday 19 February 2025 08:36:29 +0000 (0:00:00.343) 0:03:25.838 **** 2025-02-19 08:36:34.878534 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:36:34.878723 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:34.878754 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:36:34.879018 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:34.879987 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:34.880838 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:34.881346 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:34.882273 | orchestrator | 2025-02-19 08:36:34.882964 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-02-19 08:36:34.883158 | orchestrator | Wednesday 19 February 2025 08:36:34 +0000 (0:00:05.483) 0:03:31.322 **** 2025-02-19 08:36:34.958966 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-02-19 08:36:35.003727 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-02-19 08:36:35.004480 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:36:35.005088 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-02-19 08:36:35.043321 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:36:35.043452 | 
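The sysctl role above loops over per-group parameter sets (elasticsearch, rabbitmq, generic, compute, k3s_node); each set is applied only on hosts belonging to the matching group, which is why the same item shows "skipping" on some hosts and "changed" on others. A reduced sketch of that pattern, using parameter values visible in the log; the group condition shown is illustrative:

    # Sketch only: apply group-specific sysctl parameters with ansible.posix.sysctl.
    # The 'when' condition is illustrative; values match the items in the log above.
    - name: Set sysctl parameters on compute
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true
        state: present
        reload: true
      loop:
        - { name: net.netfilter.nf_conntrack_max, value: 1048576 }
      when: "'compute' in group_names"

    - name: Set sysctl parameters on generic
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true
        state: present
      loop:
        - { name: vm.swappiness, value: 1 }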
orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-02-19 08:36:35.081057 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:36:35.130508 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-02-19 08:36:35.130918 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:36:35.131061 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-02-19 08:36:35.222725 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:36:35.223665 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:36:35.225943 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-02-19 08:36:35.226835 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:36:35.227760 | orchestrator | 2025-02-19 08:36:35.229150 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-02-19 08:36:35.229345 | orchestrator | Wednesday 19 February 2025 08:36:35 +0000 (0:00:00.344) 0:03:31.667 **** 2025-02-19 08:36:36.305345 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-02-19 08:36:36.308181 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-02-19 08:36:36.308830 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-02-19 08:36:36.309242 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-02-19 08:36:36.310215 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-02-19 08:36:36.310391 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-02-19 08:36:36.310988 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-02-19 08:36:36.311223 | orchestrator | 2025-02-19 08:36:36.312024 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-02-19 08:36:36.313566 | orchestrator | Wednesday 19 February 2025 08:36:36 +0000 (0:00:01.081) 0:03:32.748 **** 2025-02-19 08:36:36.943395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:36:36.943761 | orchestrator | 2025-02-19 08:36:36.943806 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-02-19 08:36:36.946951 | orchestrator | Wednesday 19 February 2025 08:36:36 +0000 (0:00:00.639) 0:03:33.387 **** 2025-02-19 08:36:38.396100 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:38.396272 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:38.396300 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:36:38.396571 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:38.397339 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:38.398883 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:36:38.400193 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:38.400706 | orchestrator | 2025-02-19 08:36:38.401942 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-02-19 08:36:38.402529 | orchestrator | Wednesday 19 February 2025 08:36:38 +0000 (0:00:01.452) 0:03:34.840 **** 2025-02-19 08:36:39.079505 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:39.081002 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:39.081098 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:36:39.081141 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:39.081856 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:39.081921 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:39.082418 | orchestrator | ok: 
[testbed-node-2] 2025-02-19 08:36:39.084078 | orchestrator | 2025-02-19 08:36:39.084191 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-02-19 08:36:39.084473 | orchestrator | Wednesday 19 February 2025 08:36:39 +0000 (0:00:00.682) 0:03:35.523 **** 2025-02-19 08:36:39.760243 | orchestrator | changed: [testbed-manager] 2025-02-19 08:36:39.761497 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:36:39.764706 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:36:39.768098 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:36:39.769423 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:36:39.770693 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:36:39.771678 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:36:39.772678 | orchestrator | 2025-02-19 08:36:39.776197 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-02-19 08:36:39.776800 | orchestrator | Wednesday 19 February 2025 08:36:39 +0000 (0:00:00.673) 0:03:36.197 **** 2025-02-19 08:36:40.389997 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:40.391182 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:40.391852 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:40.392815 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:36:40.393069 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:40.394113 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:40.394953 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:36:40.395798 | orchestrator | 2025-02-19 08:36:40.396501 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-02-19 08:36:40.397500 | orchestrator | Wednesday 19 February 2025 08:36:40 +0000 (0:00:00.638) 0:03:36.835 **** 2025-02-19 08:36:41.391273 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739952537.716182, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.391505 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739952542.8016534, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.391695 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739952544.1851847, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.393315 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739952527.6229973, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.395892 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739952533.3642302, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.397132 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739952547.9173748, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.397787 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1739952548.0212443, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.398211 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739952561.3185, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.398700 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739952470.3900058, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.399534 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739952482.8042917, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.400267 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739952484.4145653, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.400310 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739952471.511328, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.400688 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739952484.9270205, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.401009 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1739952483.032228, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 08:36:41.401380 | orchestrator | 2025-02-19 08:36:41.401662 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-02-19 08:36:41.402117 | orchestrator | Wednesday 19 February 2025 08:36:41 +0000 (0:00:01.001) 0:03:37.836 **** 2025-02-19 08:36:42.622904 | orchestrator | changed: [testbed-manager] 2025-02-19 08:36:42.623080 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:36:42.623455 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:36:42.624062 | orchestrator | 
changed: [testbed-node-5] 2025-02-19 08:36:42.626195 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:36:42.627367 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:36:42.627555 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:36:42.627655 | orchestrator | 2025-02-19 08:36:42.628114 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-02-19 08:36:42.628861 | orchestrator | Wednesday 19 February 2025 08:36:42 +0000 (0:00:01.225) 0:03:39.061 **** 2025-02-19 08:36:43.919242 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:36:43.920086 | orchestrator | changed: [testbed-manager] 2025-02-19 08:36:43.924078 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:36:43.924858 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:36:43.924967 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:36:43.924986 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:36:43.925017 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:36:43.925666 | orchestrator | 2025-02-19 08:36:43.926738 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-02-19 08:36:43.928537 | orchestrator | Wednesday 19 February 2025 08:36:43 +0000 (0:00:01.299) 0:03:40.361 **** 2025-02-19 08:36:45.194356 | orchestrator | changed: [testbed-manager] 2025-02-19 08:36:45.195508 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:36:45.195672 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:36:45.196749 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:36:45.199616 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:36:45.199696 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:36:45.200021 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:36:45.201213 | orchestrator | 2025-02-19 08:36:45.201826 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-02-19 08:36:45.201873 | orchestrator | Wednesday 19 February 2025 08:36:45 +0000 (0:00:01.276) 0:03:41.638 **** 2025-02-19 08:36:45.268098 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:36:45.348821 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:36:45.403072 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:36:45.440656 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:36:45.523520 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:36:45.525079 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:36:45.525945 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:36:45.529381 | orchestrator | 2025-02-19 08:36:46.448903 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-02-19 08:36:46.448992 | orchestrator | Wednesday 19 February 2025 08:36:45 +0000 (0:00:00.330) 0:03:41.968 **** 2025-02-19 08:36:46.449011 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:46.449843 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:46.450836 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:46.451667 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:36:46.452090 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:46.452841 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:46.453913 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:36:46.454153 | orchestrator | 2025-02-19 08:36:46.454793 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-02-19 08:36:46.455253 | orchestrator | Wednesday 19 February 
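The motd role above disables the dynamic motd-news service, strips the pam_motd.so session rules from /etc/pam.d/sshd and /etc/pam.d/login, installs static motd/issue/issue.net files, and keeps sshd from printing the motd itself. The PAM and sshd pieces of that can be sketched with lineinfile as below; the regular expressions are illustrative, not the role's exact patterns.

    # Sketch only: remove pam_motd.so rules and disable PrintMotd in sshd.
    - name: Remove pam_motd.so rule
      ansible.builtin.lineinfile:
        path: "{{ item }}"
        regexp: '^session\s+optional\s+pam_motd\.so'
        state: absent
      loop:
        - /etc/pam.d/sshd
        - /etc/pam.d/login

    - name: Configure SSH to not print the motd
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PrintMotd'
        line: PrintMotd no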
2025 08:36:46 +0000 (0:00:00.924) 0:03:42.892 **** 2025-02-19 08:36:46.903534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:36:46.904647 | orchestrator | 2025-02-19 08:36:46.905359 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-02-19 08:36:46.906169 | orchestrator | Wednesday 19 February 2025 08:36:46 +0000 (0:00:00.454) 0:03:43.346 **** 2025-02-19 08:36:55.483903 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:55.487707 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:36:55.487775 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:36:55.488611 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:36:55.488646 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:36:55.489104 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:36:55.492115 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:36:55.492499 | orchestrator | 2025-02-19 08:36:55.493062 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-02-19 08:36:55.495970 | orchestrator | Wednesday 19 February 2025 08:36:55 +0000 (0:00:08.578) 0:03:51.925 **** 2025-02-19 08:36:56.798412 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:56.798676 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:56.799024 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:36:56.799558 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:56.803376 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:56.804324 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:36:56.805014 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:56.805411 | orchestrator | 2025-02-19 08:36:56.806106 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-02-19 08:36:56.808412 | orchestrator | Wednesday 19 February 2025 08:36:56 +0000 (0:00:01.313) 0:03:53.239 **** 2025-02-19 08:36:58.075993 | orchestrator | ok: [testbed-manager] 2025-02-19 08:36:58.077817 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:36:58.079152 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:36:58.079191 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:36:58.079210 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:36:58.079796 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:36:58.080623 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:36:58.080997 | orchestrator | 2025-02-19 08:36:58.081705 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-02-19 08:36:58.082280 | orchestrator | Wednesday 19 February 2025 08:36:58 +0000 (0:00:01.278) 0:03:54.518 **** 2025-02-19 08:36:58.604320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:36:58.604782 | orchestrator | 2025-02-19 08:36:58.605691 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-02-19 08:36:58.606656 | orchestrator | Wednesday 19 February 2025 08:36:58 +0000 (0:00:00.528) 0:03:55.047 **** 2025-02-19 08:37:07.393492 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:37:07.394912 | orchestrator | 
changed: [testbed-node-3] 2025-02-19 08:37:07.394985 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:37:07.396512 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:37:07.398170 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:37:07.398823 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:37:07.399355 | orchestrator | changed: [testbed-manager] 2025-02-19 08:37:07.400285 | orchestrator | 2025-02-19 08:37:07.401112 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-02-19 08:37:07.402066 | orchestrator | Wednesday 19 February 2025 08:37:07 +0000 (0:00:08.790) 0:04:03.838 **** 2025-02-19 08:37:08.032491 | orchestrator | changed: [testbed-manager] 2025-02-19 08:37:08.033087 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:37:08.033148 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:37:08.033323 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:37:08.033782 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:37:08.037171 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:37:08.037747 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:37:08.038641 | orchestrator | 2025-02-19 08:37:08.038810 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-02-19 08:37:08.039534 | orchestrator | Wednesday 19 February 2025 08:37:08 +0000 (0:00:00.639) 0:04:04.477 **** 2025-02-19 08:37:09.191683 | orchestrator | changed: [testbed-manager] 2025-02-19 08:37:09.192242 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:37:09.192290 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:37:09.192900 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:37:09.194006 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:37:09.194417 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:37:09.195100 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:37:09.195660 | orchestrator | 2025-02-19 08:37:09.196201 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-02-19 08:37:09.197255 | orchestrator | Wednesday 19 February 2025 08:37:09 +0000 (0:00:01.158) 0:04:05.636 **** 2025-02-19 08:37:10.261568 | orchestrator | changed: [testbed-manager] 2025-02-19 08:37:10.262654 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:37:10.263767 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:37:10.264786 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:37:10.265346 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:37:10.266081 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:37:10.267336 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:37:10.267713 | orchestrator | 2025-02-19 08:37:10.267841 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-02-19 08:37:10.268374 | orchestrator | Wednesday 19 February 2025 08:37:10 +0000 (0:00:01.067) 0:04:06.704 **** 2025-02-19 08:37:10.376229 | orchestrator | ok: [testbed-manager] 2025-02-19 08:37:10.428613 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:37:10.469020 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:37:10.504986 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:37:10.595676 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:37:10.596372 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:37:10.597208 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:37:10.597808 | orchestrator | 2025-02-19 08:37:10.598827 | orchestrator | TASK 
[osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-02-19 08:37:10.599877 | orchestrator | Wednesday 19 February 2025 08:37:10 +0000 (0:00:00.335) 0:04:07.039 **** 2025-02-19 08:37:10.720952 | orchestrator | ok: [testbed-manager] 2025-02-19 08:37:10.763440 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:37:10.801847 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:37:10.842633 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:37:10.927677 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:37:10.928723 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:37:10.931235 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:37:10.933288 | orchestrator | 2025-02-19 08:37:10.935799 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-02-19 08:37:10.936218 | orchestrator | Wednesday 19 February 2025 08:37:10 +0000 (0:00:00.331) 0:04:07.371 **** 2025-02-19 08:37:11.058397 | orchestrator | ok: [testbed-manager] 2025-02-19 08:37:11.099535 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:37:11.140284 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:37:11.175124 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:37:11.252089 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:37:11.252767 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:37:11.253784 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:37:11.254651 | orchestrator | 2025-02-19 08:37:11.255880 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-02-19 08:37:11.256302 | orchestrator | Wednesday 19 February 2025 08:37:11 +0000 (0:00:00.326) 0:04:07.697 **** 2025-02-19 08:37:16.674283 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:37:16.674814 | orchestrator | ok: [testbed-manager] 2025-02-19 08:37:16.675232 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:37:16.676728 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:37:16.677068 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:37:16.678272 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:37:16.678987 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:37:16.679015 | orchestrator | 2025-02-19 08:37:16.679335 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-02-19 08:37:16.679793 | orchestrator | Wednesday 19 February 2025 08:37:16 +0000 (0:00:05.423) 0:04:13.120 **** 2025-02-19 08:37:17.158215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:37:17.159083 | orchestrator | 2025-02-19 08:37:17.159239 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-02-19 08:37:17.160349 | orchestrator | Wednesday 19 February 2025 08:37:17 +0000 (0:00:00.482) 0:04:13.603 **** 2025-02-19 08:37:17.260379 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-02-19 08:37:17.262108 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-02-19 08:37:17.262131 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-02-19 08:37:17.262757 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-02-19 08:37:17.313084 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:37:17.313348 | orchestrator | skipping: [testbed-node-4] => 
(item=apt-daily-upgrade)  2025-02-19 08:37:17.314233 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-02-19 08:37:17.361454 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:37:17.361956 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-02-19 08:37:17.427662 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-02-19 08:37:17.428021 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:37:17.428064 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-02-19 08:37:17.428921 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-02-19 08:37:17.480284 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:37:17.481120 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-02-19 08:37:17.481163 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-02-19 08:37:17.582406 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:37:17.583420 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:37:17.585081 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-02-19 08:37:17.586671 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-02-19 08:37:17.586830 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:37:17.587427 | orchestrator | 2025-02-19 08:37:17.588978 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-02-19 08:37:17.589507 | orchestrator | Wednesday 19 February 2025 08:37:17 +0000 (0:00:00.423) 0:04:14.027 **** 2025-02-19 08:37:18.052797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:37:18.054124 | orchestrator | 2025-02-19 08:37:18.055750 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-02-19 08:37:18.058561 | orchestrator | Wednesday 19 February 2025 08:37:18 +0000 (0:00:00.470) 0:04:14.497 **** 2025-02-19 08:37:18.131405 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-02-19 08:37:18.131815 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-02-19 08:37:18.171539 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:37:18.232596 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:37:18.233129 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-02-19 08:37:18.234074 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-02-19 08:37:18.272941 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:37:18.316272 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:37:18.420656 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-02-19 08:37:18.420804 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:37:18.420870 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-02-19 08:37:18.421881 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:37:18.422081 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-02-19 08:37:18.422114 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:37:18.422513 | orchestrator | 2025-02-19 08:37:18.423234 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 
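Annotation: the cleanup steps above loop over systemd timers (apt-daily-upgrade, apt-daily) and services (ModemManager.service); in this run every iteration is skipped because the role's conditions evaluate to false. A minimal sketch of how such a loop is commonly written with the systemd module follows; only the item names are taken from the log, the play layout and everything else is illustrative and not the actual osism.commons.cleanup source.

# Illustrative sketch only, not the real role code.
- hosts: all
  become: true
  tasks:
    - name: Disable apt-daily timers
      ansible.builtin.systemd:
        name: "{{ item }}.timer"
        state: stopped
        enabled: false
      loop:
        - apt-daily-upgrade
        - apt-daily

    - name: Cleanup services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - ModemManager.service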
2025-02-19 08:37:18.423343 | orchestrator | Wednesday 19 February 2025 08:37:18 +0000 (0:00:00.369) 0:04:14.866 **** 2025-02-19 08:37:19.088522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:37:19.088860 | orchestrator | 2025-02-19 08:37:19.088900 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-02-19 08:37:19.089028 | orchestrator | Wednesday 19 February 2025 08:37:19 +0000 (0:00:00.668) 0:04:15.534 **** 2025-02-19 08:37:53.389551 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:37:53.390180 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:37:53.390215 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:37:53.390234 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:37:53.390251 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:37:53.390266 | orchestrator | changed: [testbed-manager] 2025-02-19 08:37:53.390288 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:37:53.392458 | orchestrator | 2025-02-19 08:37:53.393006 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-02-19 08:38:01.255359 | orchestrator | Wednesday 19 February 2025 08:37:53 +0000 (0:00:34.296) 0:04:49.830 **** 2025-02-19 08:38:01.255518 | orchestrator | changed: [testbed-manager] 2025-02-19 08:38:01.256773 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:01.257012 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:01.260992 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:01.262679 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:01.262784 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:01.263676 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:01.264087 | orchestrator | 2025-02-19 08:38:01.264492 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-02-19 08:38:01.265251 | orchestrator | Wednesday 19 February 2025 08:38:01 +0000 (0:00:07.867) 0:04:57.698 **** 2025-02-19 08:38:08.641830 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:08.642122 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:08.644348 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:08.644674 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:08.645771 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:08.647668 | orchestrator | changed: [testbed-manager] 2025-02-19 08:38:08.647890 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:08.650272 | orchestrator | 2025-02-19 08:38:08.650353 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-02-19 08:38:08.651857 | orchestrator | Wednesday 19 February 2025 08:38:08 +0000 (0:00:07.387) 0:05:05.085 **** 2025-02-19 08:38:10.260889 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:10.261043 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:38:10.261070 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:38:10.261444 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:38:10.261970 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:38:10.262701 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:38:10.263123 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:38:10.263824 | orchestrator | 2025-02-19 08:38:10.264193 | 
orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-02-19 08:38:10.264986 | orchestrator | Wednesday 19 February 2025 08:38:10 +0000 (0:00:01.619) 0:05:06.704 **** 2025-02-19 08:38:15.912669 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:15.913007 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:15.914818 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:15.915660 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:15.915938 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:15.917139 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:15.917672 | orchestrator | changed: [testbed-manager] 2025-02-19 08:38:15.917700 | orchestrator | 2025-02-19 08:38:15.917725 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-02-19 08:38:15.918147 | orchestrator | Wednesday 19 February 2025 08:38:15 +0000 (0:00:05.650) 0:05:12.354 **** 2025-02-19 08:38:16.389064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:38:16.389297 | orchestrator | 2025-02-19 08:38:16.389872 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-02-19 08:38:16.390368 | orchestrator | Wednesday 19 February 2025 08:38:16 +0000 (0:00:00.478) 0:05:12.833 **** 2025-02-19 08:38:17.141239 | orchestrator | changed: [testbed-manager] 2025-02-19 08:38:17.141405 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:17.141426 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:17.141438 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:17.141456 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:17.141788 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:17.141836 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:17.143086 | orchestrator | 2025-02-19 08:38:17.143752 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-02-19 08:38:17.143799 | orchestrator | Wednesday 19 February 2025 08:38:17 +0000 (0:00:00.751) 0:05:13.584 **** 2025-02-19 08:38:18.880924 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:18.883574 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:38:18.883686 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:38:18.883708 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:38:18.884948 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:38:18.885918 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:38:18.886703 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:38:18.887199 | orchestrator | 2025-02-19 08:38:18.888076 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-02-19 08:38:18.888299 | orchestrator | Wednesday 19 February 2025 08:38:18 +0000 (0:00:01.739) 0:05:15.324 **** 2025-02-19 08:38:19.732331 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:19.732544 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:19.732901 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:19.734168 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:19.734927 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:19.735014 | orchestrator | changed: [testbed-manager] 2025-02-19 08:38:19.736033 | orchestrator | changed: 
[testbed-node-2] 2025-02-19 08:38:19.736776 | orchestrator | 2025-02-19 08:38:19.737004 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-02-19 08:38:19.738011 | orchestrator | Wednesday 19 February 2025 08:38:19 +0000 (0:00:00.850) 0:05:16.175 **** 2025-02-19 08:38:19.861158 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:38:19.917667 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:38:19.954741 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:38:20.007643 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:38:20.087404 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:38:20.089836 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:38:20.090426 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:38:20.091649 | orchestrator | 2025-02-19 08:38:20.091755 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-02-19 08:38:20.091824 | orchestrator | Wednesday 19 February 2025 08:38:20 +0000 (0:00:00.351) 0:05:16.527 **** 2025-02-19 08:38:20.163137 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:38:20.199910 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:38:20.238303 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:38:20.280653 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:38:20.314398 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:38:20.528862 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:38:20.530702 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:38:20.530742 | orchestrator | 2025-02-19 08:38:20.667156 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-02-19 08:38:20.667271 | orchestrator | Wednesday 19 February 2025 08:38:20 +0000 (0:00:00.446) 0:05:16.973 **** 2025-02-19 08:38:20.667302 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:20.709038 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:38:20.749465 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:38:20.783172 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:38:20.877365 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:38:20.877989 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:38:20.878823 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:38:20.879288 | orchestrator | 2025-02-19 08:38:20.880442 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-02-19 08:38:20.880964 | orchestrator | Wednesday 19 February 2025 08:38:20 +0000 (0:00:00.349) 0:05:17.323 **** 2025-02-19 08:38:20.974005 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:38:21.016382 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:38:21.055798 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:38:21.098248 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:38:21.139837 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:38:21.208839 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:38:21.209017 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:38:21.210981 | orchestrator | 2025-02-19 08:38:21.211461 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-02-19 08:38:21.212197 | orchestrator | Wednesday 19 February 2025 08:38:21 +0000 (0:00:00.330) 0:05:17.654 **** 2025-02-19 08:38:21.317476 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:21.357205 | orchestrator | ok: [testbed-node-3] 2025-02-19 
08:38:21.414733 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:38:21.587616 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:38:21.689275 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:38:21.771350 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:38:21.771474 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:38:21.771493 | orchestrator | 2025-02-19 08:38:21.771510 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-02-19 08:38:21.771544 | orchestrator | Wednesday 19 February 2025 08:38:21 +0000 (0:00:00.476) 0:05:18.131 **** 2025-02-19 08:38:21.771653 | orchestrator | ok: [testbed-manager] =>  2025-02-19 08:38:21.816143 | orchestrator |  docker_version: 5:27.4.1 2025-02-19 08:38:21.816309 | orchestrator | ok: [testbed-node-3] =>  2025-02-19 08:38:21.817788 | orchestrator |  docker_version: 5:27.4.1 2025-02-19 08:38:21.865971 | orchestrator | ok: [testbed-node-4] =>  2025-02-19 08:38:21.868465 | orchestrator |  docker_version: 5:27.4.1 2025-02-19 08:38:21.906754 | orchestrator | ok: [testbed-node-5] =>  2025-02-19 08:38:22.028177 | orchestrator |  docker_version: 5:27.4.1 2025-02-19 08:38:22.028311 | orchestrator | ok: [testbed-node-0] =>  2025-02-19 08:38:22.028421 | orchestrator |  docker_version: 5:27.4.1 2025-02-19 08:38:22.028891 | orchestrator | ok: [testbed-node-1] =>  2025-02-19 08:38:22.029909 | orchestrator |  docker_version: 5:27.4.1 2025-02-19 08:38:22.030427 | orchestrator | ok: [testbed-node-2] =>  2025-02-19 08:38:22.030794 | orchestrator |  docker_version: 5:27.4.1 2025-02-19 08:38:22.030825 | orchestrator | 2025-02-19 08:38:22.031029 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-02-19 08:38:22.032266 | orchestrator | Wednesday 19 February 2025 08:38:22 +0000 (0:00:00.341) 0:05:18.472 **** 2025-02-19 08:38:22.147502 | orchestrator | ok: [testbed-manager] =>  2025-02-19 08:38:22.226573 | orchestrator |  docker_cli_version: 5:27.4.1 2025-02-19 08:38:22.226746 | orchestrator | ok: [testbed-node-3] =>  2025-02-19 08:38:22.226854 | orchestrator |  docker_cli_version: 5:27.4.1 2025-02-19 08:38:22.268626 | orchestrator | ok: [testbed-node-4] =>  2025-02-19 08:38:22.322090 | orchestrator |  docker_cli_version: 5:27.4.1 2025-02-19 08:38:22.322207 | orchestrator | ok: [testbed-node-5] =>  2025-02-19 08:38:22.411916 | orchestrator |  docker_cli_version: 5:27.4.1 2025-02-19 08:38:22.412050 | orchestrator | ok: [testbed-node-0] =>  2025-02-19 08:38:22.412727 | orchestrator |  docker_cli_version: 5:27.4.1 2025-02-19 08:38:22.414102 | orchestrator | ok: [testbed-node-1] =>  2025-02-19 08:38:22.418248 | orchestrator |  docker_cli_version: 5:27.4.1 2025-02-19 08:38:22.418417 | orchestrator | ok: [testbed-node-2] =>  2025-02-19 08:38:22.418442 | orchestrator |  docker_cli_version: 5:27.4.1 2025-02-19 08:38:22.418459 | orchestrator | 2025-02-19 08:38:22.418475 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-02-19 08:38:22.418498 | orchestrator | Wednesday 19 February 2025 08:38:22 +0000 (0:00:00.384) 0:05:18.857 **** 2025-02-19 08:38:22.501356 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:38:22.546426 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:38:22.582267 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:38:22.620005 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:38:22.657560 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:38:22.737765 | orchestrator | 
skipping: [testbed-node-1] 2025-02-19 08:38:22.737999 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:38:22.738131 | orchestrator | 2025-02-19 08:38:22.738163 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-02-19 08:38:22.738760 | orchestrator | Wednesday 19 February 2025 08:38:22 +0000 (0:00:00.326) 0:05:19.184 **** 2025-02-19 08:38:22.852702 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:38:22.881912 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:38:22.968487 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:38:23.009656 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:38:23.080938 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:38:23.081685 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:38:23.084392 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:38:23.086143 | orchestrator | 2025-02-19 08:38:23.540477 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-02-19 08:38:23.540573 | orchestrator | Wednesday 19 February 2025 08:38:23 +0000 (0:00:00.342) 0:05:19.526 **** 2025-02-19 08:38:23.540621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:38:23.540866 | orchestrator | 2025-02-19 08:38:23.542107 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-02-19 08:38:23.542283 | orchestrator | Wednesday 19 February 2025 08:38:23 +0000 (0:00:00.459) 0:05:19.986 **** 2025-02-19 08:38:24.601857 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:24.602093 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:38:24.602128 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:38:24.602151 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:38:24.602210 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:38:24.602281 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:38:24.602616 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:38:24.602959 | orchestrator | 2025-02-19 08:38:24.603033 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-02-19 08:38:24.605179 | orchestrator | Wednesday 19 February 2025 08:38:24 +0000 (0:00:01.057) 0:05:21.043 **** 2025-02-19 08:38:27.602359 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:38:27.602473 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:38:27.603755 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:38:27.605113 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:38:27.606608 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:38:27.607256 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:38:27.608035 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:27.608932 | orchestrator | 2025-02-19 08:38:27.610377 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-02-19 08:38:27.611042 | orchestrator | Wednesday 19 February 2025 08:38:27 +0000 (0:00:03.002) 0:05:24.046 **** 2025-02-19 08:38:27.679437 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-02-19 08:38:27.925104 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-02-19 08:38:27.926013 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-02-19 
08:38:27.927628 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-02-19 08:38:27.930226 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-02-19 08:38:28.001034 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-02-19 08:38:28.001164 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:38:28.002513 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-02-19 08:38:28.002565 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-02-19 08:38:28.115810 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:38:28.115986 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-02-19 08:38:28.116841 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-02-19 08:38:28.117463 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-02-19 08:38:28.118162 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-02-19 08:38:28.208758 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:38:28.209241 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-02-19 08:38:28.212816 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-02-19 08:38:28.285048 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-02-19 08:38:28.286546 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:38:28.439940 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-02-19 08:38:28.440063 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-02-19 08:38:28.440100 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:38:28.440550 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-02-19 08:38:28.441218 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:38:28.442506 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-02-19 08:38:28.443159 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-02-19 08:38:28.444208 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-02-19 08:38:28.445242 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:38:28.445378 | orchestrator | 2025-02-19 08:38:28.446102 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-02-19 08:38:28.446852 | orchestrator | Wednesday 19 February 2025 08:38:28 +0000 (0:00:00.837) 0:05:24.884 **** 2025-02-19 08:38:34.645801 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:34.645974 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:34.646751 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:34.649025 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:34.649249 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:34.650489 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:34.651958 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:34.652632 | orchestrator | 2025-02-19 08:38:34.654150 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-02-19 08:38:34.654790 | orchestrator | Wednesday 19 February 2025 08:38:34 +0000 (0:00:06.205) 0:05:31.089 **** 2025-02-19 08:38:35.841751 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:35.841932 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:35.842481 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:35.843452 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:35.844092 | orchestrator | 
changed: [testbed-node-0] 2025-02-19 08:38:35.845190 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:35.845513 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:35.845977 | orchestrator | 2025-02-19 08:38:35.846598 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-02-19 08:38:35.847321 | orchestrator | Wednesday 19 February 2025 08:38:35 +0000 (0:00:01.194) 0:05:32.284 **** 2025-02-19 08:38:43.101085 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:43.101252 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:43.101268 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:43.103539 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:43.104832 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:43.104887 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:43.108005 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:43.109049 | orchestrator | 2025-02-19 08:38:43.110196 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-02-19 08:38:43.111163 | orchestrator | Wednesday 19 February 2025 08:38:43 +0000 (0:00:07.258) 0:05:39.542 **** 2025-02-19 08:38:46.321073 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:46.321179 | orchestrator | changed: [testbed-manager] 2025-02-19 08:38:46.321288 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:46.322764 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:46.325508 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:46.330920 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:47.743095 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:47.743217 | orchestrator | 2025-02-19 08:38:47.743238 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-02-19 08:38:47.743255 | orchestrator | Wednesday 19 February 2025 08:38:46 +0000 (0:00:03.222) 0:05:42.764 **** 2025-02-19 08:38:47.743286 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:47.745274 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:47.745308 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:47.745374 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:47.746464 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:47.747479 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:47.748576 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:47.749416 | orchestrator | 2025-02-19 08:38:47.750622 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-02-19 08:38:47.751123 | orchestrator | Wednesday 19 February 2025 08:38:47 +0000 (0:00:01.420) 0:05:44.185 **** 2025-02-19 08:38:49.170757 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:49.170949 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:49.171666 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:49.173248 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:49.173543 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:49.173952 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:49.174643 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:49.175664 | orchestrator | 2025-02-19 08:38:49.175793 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-02-19 08:38:49.176206 | orchestrator | Wednesday 19 February 2025 08:38:49 +0000 (0:00:01.428) 0:05:45.613 **** 
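Annotation: the repository and version-pinning steps above (add GPG key, add repository, pin the docker and docker-cli packages, then unlock/lock containerd) follow a common apt pattern. Below is a hedged sketch of that pattern; the upstream package names (docker-ce, containerd.io), the repository URL, the preferences-file approach and all paths are assumptions, not the actual osism.services.docker implementation. Only the pinned version matches the docker_version printed earlier in this log.

# Sketch only: names, paths and URLs are assumptions.
- hosts: all
  become: true
  tasks:
    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/trusted.gpg.d/docker.asc
        mode: "0644"

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        content: |
          Package: docker-ce
          Pin: version 5:27.4.1*
          Pin-Priority: 1000
        mode: "0644"

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold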
2025-02-19 08:38:49.387469 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:38:49.458660 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:38:49.526337 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:38:49.595869 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:38:49.804964 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:38:49.807235 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:38:49.812313 | orchestrator | changed: [testbed-manager] 2025-02-19 08:38:49.814078 | orchestrator | 2025-02-19 08:38:49.818274 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-02-19 08:38:49.819369 | orchestrator | Wednesday 19 February 2025 08:38:49 +0000 (0:00:00.637) 0:05:46.251 **** 2025-02-19 08:38:59.481062 | orchestrator | ok: [testbed-manager] 2025-02-19 08:38:59.481434 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:38:59.481475 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:38:59.481499 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:38:59.484087 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:38:59.484856 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:38:59.485983 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:38:59.486169 | orchestrator | 2025-02-19 08:38:59.487143 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-02-19 08:38:59.487961 | orchestrator | Wednesday 19 February 2025 08:38:59 +0000 (0:00:09.669) 0:05:55.920 **** 2025-02-19 08:39:00.175138 | orchestrator | changed: [testbed-manager] 2025-02-19 08:39:00.699700 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:00.703392 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:00.703434 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:00.703457 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:00.704842 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:00.705231 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:00.705611 | orchestrator | 2025-02-19 08:39:00.706128 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-02-19 08:39:00.706513 | orchestrator | Wednesday 19 February 2025 08:39:00 +0000 (0:00:01.217) 0:05:57.137 **** 2025-02-19 08:39:10.169148 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:10.170438 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:10.171159 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:10.172948 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:10.174954 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:10.175818 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:10.176557 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:10.177315 | orchestrator | 2025-02-19 08:39:10.177665 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-02-19 08:39:10.177997 | orchestrator | Wednesday 19 February 2025 08:39:10 +0000 (0:00:09.475) 0:06:06.613 **** 2025-02-19 08:39:21.501131 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:21.501353 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:21.501381 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:21.501398 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:21.501421 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:21.501665 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:21.502280 | 
orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:21.502782 | orchestrator | 2025-02-19 08:39:21.503081 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-02-19 08:39:21.503678 | orchestrator | Wednesday 19 February 2025 08:39:21 +0000 (0:00:11.327) 0:06:17.940 **** 2025-02-19 08:39:21.857100 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-02-19 08:39:22.788127 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-02-19 08:39:22.788552 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-02-19 08:39:22.788658 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-02-19 08:39:22.791793 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-02-19 08:39:22.792461 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-02-19 08:39:22.792767 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-02-19 08:39:22.794166 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-02-19 08:39:22.795088 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-02-19 08:39:22.795132 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-02-19 08:39:22.795504 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-02-19 08:39:22.796592 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-02-19 08:39:22.797325 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-02-19 08:39:22.797999 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-02-19 08:39:22.798904 | orchestrator | 2025-02-19 08:39:22.799664 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-02-19 08:39:22.800103 | orchestrator | Wednesday 19 February 2025 08:39:22 +0000 (0:00:01.289) 0:06:19.230 **** 2025-02-19 08:39:22.931200 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:23.000821 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:39:23.090341 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:39:23.165767 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:39:23.237099 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:39:23.363129 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:39:23.363804 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:39:23.364686 | orchestrator | 2025-02-19 08:39:23.366107 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-02-19 08:39:23.366968 | orchestrator | Wednesday 19 February 2025 08:39:23 +0000 (0:00:00.578) 0:06:19.808 **** 2025-02-19 08:39:27.524548 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:27.524835 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:27.524864 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:27.525426 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:27.526887 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:27.527749 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:27.528414 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:27.528664 | orchestrator | 2025-02-19 08:39:27.529046 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-02-19 08:39:27.529310 | orchestrator | Wednesday 19 February 2025 08:39:27 +0000 (0:00:04.157) 0:06:23.966 **** 2025-02-19 08:39:27.670193 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:27.738430 | 
orchestrator | skipping: [testbed-node-3] 2025-02-19 08:39:27.807648 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:39:27.884192 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:39:27.955144 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:39:28.055020 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:39:28.055423 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:39:28.056258 | orchestrator | 2025-02-19 08:39:28.057109 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-02-19 08:39:28.061010 | orchestrator | Wednesday 19 February 2025 08:39:28 +0000 (0:00:00.532) 0:06:24.499 **** 2025-02-19 08:39:28.138862 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-02-19 08:39:28.139326 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-02-19 08:39:28.211423 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:28.212189 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-02-19 08:39:28.212985 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-02-19 08:39:28.287116 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:39:28.288133 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-02-19 08:39:28.288932 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-02-19 08:39:28.374931 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:39:28.375515 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-02-19 08:39:28.375708 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-02-19 08:39:28.444437 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:39:28.445680 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-02-19 08:39:28.446202 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-02-19 08:39:28.520837 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:39:28.521025 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-02-19 08:39:28.522568 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-02-19 08:39:28.647887 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:39:28.648087 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-02-19 08:39:28.649063 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-02-19 08:39:28.649231 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:39:28.650335 | orchestrator | 2025-02-19 08:39:28.653851 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-02-19 08:39:28.654315 | orchestrator | Wednesday 19 February 2025 08:39:28 +0000 (0:00:00.595) 0:06:25.094 **** 2025-02-19 08:39:28.799682 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:28.886827 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:39:28.955804 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:39:29.042152 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:39:29.133242 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:39:29.247045 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:39:29.247717 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:39:29.248338 | orchestrator | 2025-02-19 08:39:29.249133 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 
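Annotation: two mutually exclusive paths for the Docker python bindings are visible above: installing python3-docker from packages (taken in this run, with a newer build pulled from Debian Sid where needed) or installing the bindings from pip (skipped here). A hedged sketch of the skipped pip variant follows; docker_python_bindings_from_pip is an assumed switch name, not necessarily the variable the role uses.

# Sketch of the pip path that was skipped in this run.
- hosts: all
  become: true
  vars:
    docker_python_bindings_from_pip: false
  tasks:
    - name: Install python3-pip package (install python bindings from pip)
      ansible.builtin.apt:
        name: python3-pip
        state: present
      when: docker_python_bindings_from_pip | bool

    - name: Install docker packages (install python bindings from pip)
      ansible.builtin.pip:
        name: docker
        state: present
      when: docker_python_bindings_from_pip | bool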
2025-02-19 08:39:29.249820 | orchestrator | Wednesday 19 February 2025 08:39:29 +0000 (0:00:00.597) 0:06:25.691 **** 2025-02-19 08:39:29.415512 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:29.492703 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:39:29.570200 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:39:29.653619 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:39:29.729887 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:39:29.839184 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:39:29.842940 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:39:29.843032 | orchestrator | 2025-02-19 08:39:29.843119 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-02-19 08:39:29.843658 | orchestrator | Wednesday 19 February 2025 08:39:29 +0000 (0:00:00.592) 0:06:26.284 **** 2025-02-19 08:39:30.208109 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:30.275403 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:39:30.356132 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:39:30.444979 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:39:30.525696 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:39:30.663091 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:39:30.663375 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:39:30.664575 | orchestrator | 2025-02-19 08:39:30.664681 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-02-19 08:39:30.665202 | orchestrator | Wednesday 19 February 2025 08:39:30 +0000 (0:00:00.821) 0:06:27.105 **** 2025-02-19 08:39:32.535096 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:32.535287 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:39:32.537016 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:39:32.538199 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:39:32.538738 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:39:32.539085 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:39:32.540267 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:39:32.540709 | orchestrator | 2025-02-19 08:39:32.541249 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-02-19 08:39:32.541578 | orchestrator | Wednesday 19 February 2025 08:39:32 +0000 (0:00:01.874) 0:06:28.979 **** 2025-02-19 08:39:33.491076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:39:33.491323 | orchestrator | 2025-02-19 08:39:33.491813 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-02-19 08:39:33.496125 | orchestrator | Wednesday 19 February 2025 08:39:33 +0000 (0:00:00.953) 0:06:29.933 **** 2025-02-19 08:39:34.150261 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:34.552165 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:34.552406 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:34.552441 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:34.553509 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:34.554839 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:34.555315 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:34.555348 | orchestrator | 2025-02-19 08:39:34.556184 | orchestrator | 
TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-02-19 08:39:34.556995 | orchestrator | Wednesday 19 February 2025 08:39:34 +0000 (0:00:01.062) 0:06:30.995 **** 2025-02-19 08:39:35.451860 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:35.453276 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:35.454137 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:35.456295 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:35.457970 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:35.459056 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:35.459505 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:35.460875 | orchestrator | 2025-02-19 08:39:35.461953 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-02-19 08:39:35.462984 | orchestrator | Wednesday 19 February 2025 08:39:35 +0000 (0:00:00.897) 0:06:31.893 **** 2025-02-19 08:39:36.833115 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:36.834209 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:36.834294 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:36.838397 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:36.841318 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:36.841651 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:36.842344 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:36.843017 | orchestrator | 2025-02-19 08:39:36.845328 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-02-19 08:39:36.845653 | orchestrator | Wednesday 19 February 2025 08:39:36 +0000 (0:00:01.384) 0:06:33.277 **** 2025-02-19 08:39:36.977366 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:38.340355 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:39:38.340539 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:39:38.341047 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:39:38.341458 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:39:38.342239 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:39:38.343134 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:39:38.344076 | orchestrator | 2025-02-19 08:39:38.344154 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-02-19 08:39:38.344754 | orchestrator | Wednesday 19 February 2025 08:39:38 +0000 (0:00:01.504) 0:06:34.782 **** 2025-02-19 08:39:39.816355 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:39.816870 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:39.817845 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:39.819165 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:39.820892 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:39.821265 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:39.822268 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:39.822728 | orchestrator | 2025-02-19 08:39:39.823250 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-02-19 08:39:39.824628 | orchestrator | Wednesday 19 February 2025 08:39:39 +0000 (0:00:01.476) 0:06:36.259 **** 2025-02-19 08:39:41.536164 | orchestrator | changed: [testbed-manager] 2025-02-19 08:39:41.537450 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:41.538639 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:41.539790 | orchestrator | changed: [testbed-node-5] 
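Annotation: the config tasks above create a systemd drop-in ("overlay") directory and file for the docker unit, copy a limits file and daemon.json, and reload systemd only when the drop-in changed. A minimal sketch of that drop-in pattern follows; the drop-in content, paths and handler wiring are assumptions, only the task flow mirrors the log.

# Sketch only: file contents are illustrative placeholders.
- hosts: all
  become: true
  tasks:
    - name: Create systemd overlay directory
      ansible.builtin.file:
        path: /etc/systemd/system/docker.service.d
        state: directory
        mode: "0755"

    - name: Copy systemd overlay file
      ansible.builtin.copy:
        dest: /etc/systemd/system/docker.service.d/overlay.conf
        content: |
          [Service]
          LimitNOFILE=1048576
        mode: "0644"
      notify: Reload systemd daemon

  handlers:
    - name: Reload systemd daemon
      ansible.builtin.systemd:
        daemon_reload: true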
2025-02-19 08:39:41.540513 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:41.541351 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:41.542289 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:41.543313 | orchestrator | 2025-02-19 08:39:41.544139 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-02-19 08:39:41.544832 | orchestrator | Wednesday 19 February 2025 08:39:41 +0000 (0:00:01.719) 0:06:37.978 **** 2025-02-19 08:39:42.466009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:39:42.466446 | orchestrator | 2025-02-19 08:39:42.466485 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-02-19 08:39:42.466509 | orchestrator | Wednesday 19 February 2025 08:39:42 +0000 (0:00:00.924) 0:06:38.903 **** 2025-02-19 08:39:43.937967 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:39:43.938515 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:43.938799 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:39:43.940626 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:39:43.941138 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:39:43.943497 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:39:43.943671 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:39:43.948088 | orchestrator | 2025-02-19 08:39:43.948988 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-02-19 08:39:43.949160 | orchestrator | Wednesday 19 February 2025 08:39:43 +0000 (0:00:01.479) 0:06:40.382 **** 2025-02-19 08:39:45.075626 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:45.076411 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:39:45.076676 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:39:45.078216 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:39:45.078892 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:39:45.079785 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:39:45.080989 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:39:45.081296 | orchestrator | 2025-02-19 08:39:45.082228 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-02-19 08:39:45.082803 | orchestrator | Wednesday 19 February 2025 08:39:45 +0000 (0:00:01.135) 0:06:41.517 **** 2025-02-19 08:39:46.556517 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:46.556773 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:39:46.559130 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:39:46.560412 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:39:46.560501 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:39:46.560557 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:39:46.560653 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:39:46.561639 | orchestrator | 2025-02-19 08:39:46.562469 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-02-19 08:39:46.563241 | orchestrator | Wednesday 19 February 2025 08:39:46 +0000 (0:00:01.480) 0:06:42.997 **** 2025-02-19 08:39:47.851801 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:47.851967 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:39:47.852943 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:39:47.856123 | orchestrator | ok: [testbed-node-4] 2025-02-19 
08:39:47.856305 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:39:47.856330 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:39:47.856344 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:39:47.856358 | orchestrator | 2025-02-19 08:39:47.856373 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-02-19 08:39:47.856394 | orchestrator | Wednesday 19 February 2025 08:39:47 +0000 (0:00:01.296) 0:06:44.293 **** 2025-02-19 08:39:49.342555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:39:49.343095 | orchestrator | 2025-02-19 08:39:49.344288 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-19 08:39:49.345868 | orchestrator | Wednesday 19 February 2025 08:39:48 +0000 (0:00:00.996) 0:06:45.290 **** 2025-02-19 08:39:49.347196 | orchestrator | 2025-02-19 08:39:49.349545 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-19 08:39:49.350095 | orchestrator | Wednesday 19 February 2025 08:39:48 +0000 (0:00:00.048) 0:06:45.338 **** 2025-02-19 08:39:49.351364 | orchestrator | 2025-02-19 08:39:49.352481 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-19 08:39:49.353179 | orchestrator | Wednesday 19 February 2025 08:39:48 +0000 (0:00:00.048) 0:06:45.386 **** 2025-02-19 08:39:49.354262 | orchestrator | 2025-02-19 08:39:49.354498 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-19 08:39:49.355380 | orchestrator | Wednesday 19 February 2025 08:39:48 +0000 (0:00:00.039) 0:06:45.426 **** 2025-02-19 08:39:49.356345 | orchestrator | 2025-02-19 08:39:49.357195 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-19 08:39:49.358220 | orchestrator | Wednesday 19 February 2025 08:39:49 +0000 (0:00:00.040) 0:06:45.466 **** 2025-02-19 08:39:49.358758 | orchestrator | 2025-02-19 08:39:49.359025 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-19 08:39:49.359712 | orchestrator | Wednesday 19 February 2025 08:39:49 +0000 (0:00:00.237) 0:06:45.704 **** 2025-02-19 08:39:49.360796 | orchestrator | 2025-02-19 08:39:49.361209 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-02-19 08:39:49.361996 | orchestrator | Wednesday 19 February 2025 08:39:49 +0000 (0:00:00.040) 0:06:45.745 **** 2025-02-19 08:39:49.362169 | orchestrator | 2025-02-19 08:39:49.362555 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-19 08:39:49.362930 | orchestrator | Wednesday 19 February 2025 08:39:49 +0000 (0:00:00.041) 0:06:45.786 **** 2025-02-19 08:39:50.625785 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:39:50.626173 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:39:50.628154 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:39:50.630983 | orchestrator | 2025-02-19 08:39:50.631637 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-02-19 08:39:50.632270 | orchestrator | Wednesday 19 February 2025 08:39:50 +0000 (0:00:01.282) 0:06:47.069 **** 2025-02-19 08:39:52.123743 | orchestrator | 
changed: [testbed-node-3] 2025-02-19 08:39:52.124827 | orchestrator | changed: [testbed-manager] 2025-02-19 08:39:52.125770 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:52.125815 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:52.127034 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:52.127077 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:52.127092 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:52.127106 | orchestrator | 2025-02-19 08:39:52.127128 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-02-19 08:39:52.127757 | orchestrator | Wednesday 19 February 2025 08:39:52 +0000 (0:00:01.499) 0:06:48.568 **** 2025-02-19 08:39:53.384848 | orchestrator | changed: [testbed-manager] 2025-02-19 08:39:53.385460 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:53.385520 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:53.386863 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:53.387802 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:53.388514 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:53.388761 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:53.389358 | orchestrator | 2025-02-19 08:39:53.389929 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-02-19 08:39:53.390706 | orchestrator | Wednesday 19 February 2025 08:39:53 +0000 (0:00:01.257) 0:06:49.826 **** 2025-02-19 08:39:53.529052 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:55.674313 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:55.676672 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:55.676725 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:55.678458 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:55.678491 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:55.678507 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:55.678529 | orchestrator | 2025-02-19 08:39:55.679025 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-02-19 08:39:55.679811 | orchestrator | Wednesday 19 February 2025 08:39:55 +0000 (0:00:02.290) 0:06:52.117 **** 2025-02-19 08:39:55.782906 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:39:55.783410 | orchestrator | 2025-02-19 08:39:55.783474 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-02-19 08:39:55.783753 | orchestrator | Wednesday 19 February 2025 08:39:55 +0000 (0:00:00.109) 0:06:52.226 **** 2025-02-19 08:39:57.158459 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:57.158860 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:39:57.159017 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:39:57.159948 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:39:57.163023 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:39:57.163316 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:39:57.163348 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:39:57.163724 | orchestrator | 2025-02-19 08:39:57.165416 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-02-19 08:39:57.320755 | orchestrator | Wednesday 19 February 2025 08:39:57 +0000 (0:00:01.374) 0:06:53.601 **** 2025-02-19 08:39:57.320880 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:39:57.399088 | orchestrator | skipping: 
[testbed-node-3] 2025-02-19 08:39:57.504380 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:39:57.577491 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:39:57.654464 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:39:57.798193 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:39:57.798685 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:39:57.800073 | orchestrator | 2025-02-19 08:39:57.800783 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-02-19 08:39:57.801383 | orchestrator | Wednesday 19 February 2025 08:39:57 +0000 (0:00:00.640) 0:06:54.241 **** 2025-02-19 08:39:58.784692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:39:58.785980 | orchestrator | 2025-02-19 08:39:58.787316 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-02-19 08:39:58.787723 | orchestrator | Wednesday 19 February 2025 08:39:58 +0000 (0:00:00.984) 0:06:55.226 **** 2025-02-19 08:39:59.233146 | orchestrator | ok: [testbed-manager] 2025-02-19 08:39:59.680639 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:39:59.680817 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:39:59.681866 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:39:59.681906 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:39:59.682735 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:39:59.683201 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:39:59.683353 | orchestrator | 2025-02-19 08:39:59.683714 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-02-19 08:39:59.684071 | orchestrator | Wednesday 19 February 2025 08:39:59 +0000 (0:00:00.901) 0:06:56.127 **** 2025-02-19 08:40:02.734703 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-02-19 08:40:02.735107 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-02-19 08:40:02.738491 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-02-19 08:40:02.739254 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-02-19 08:40:02.739373 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-02-19 08:40:02.751020 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-02-19 08:40:02.751222 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-02-19 08:40:02.753029 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-02-19 08:40:02.753213 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-02-19 08:40:02.753366 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-02-19 08:40:02.753934 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-02-19 08:40:02.755152 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-02-19 08:40:02.756466 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-02-19 08:40:02.757274 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-02-19 08:40:02.757906 | orchestrator | 2025-02-19 08:40:02.758070 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-02-19 08:40:02.758171 | orchestrator | Wednesday 19 February 2025 08:40:02 +0000 
(0:00:03.049) 0:06:59.176 **** 2025-02-19 08:40:02.888412 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:40:02.959362 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:40:03.036823 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:40:03.105259 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:40:03.174489 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:40:03.290510 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:40:03.291402 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:40:03.292787 | orchestrator | 2025-02-19 08:40:03.292985 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-02-19 08:40:03.293707 | orchestrator | Wednesday 19 February 2025 08:40:03 +0000 (0:00:00.559) 0:06:59.735 **** 2025-02-19 08:40:04.237863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:40:04.239696 | orchestrator | 2025-02-19 08:40:04.759890 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-02-19 08:40:04.760898 | orchestrator | Wednesday 19 February 2025 08:40:04 +0000 (0:00:00.944) 0:07:00.680 **** 2025-02-19 08:40:04.761006 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:05.353440 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:05.354457 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:05.354946 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:05.355103 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:05.355431 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:05.356211 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:05.357963 | orchestrator | 2025-02-19 08:40:05.359093 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-02-19 08:40:05.359135 | orchestrator | Wednesday 19 February 2025 08:40:05 +0000 (0:00:01.118) 0:07:01.799 **** 2025-02-19 08:40:05.823379 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:06.260440 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:06.261148 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:06.261202 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:06.262269 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:06.262981 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:06.263719 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:06.264364 | orchestrator | 2025-02-19 08:40:06.265148 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-02-19 08:40:06.265611 | orchestrator | Wednesday 19 February 2025 08:40:06 +0000 (0:00:00.903) 0:07:02.703 **** 2025-02-19 08:40:06.393381 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:40:06.454690 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:40:06.517739 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:40:06.586203 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:40:06.648490 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:40:06.766441 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:40:06.768416 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:40:06.771290 | orchestrator | 2025-02-19 08:40:06.771376 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 
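Note: the osism.commons.docker_compose tasks in this stretch migrate the hosts from the legacy standalone docker-compose to the Compose v2 apt plugin: the old apt preferences file and binary are cleaned up (skipped here because none were present), the docker-compose package is uninstalled in the task that follows, and docker-compose-plugin is installed further down. A minimal sketch of that package swap is shown below; the package names follow the task names in the log, while the remaining module arguments are assumptions.

    # Sketch only; state handling and cache options are assumptions, not the
    # role's actual implementation.
    - name: Uninstall docker-compose package
      become: true
      ansible.builtin.apt:
        name: docker-compose
        state: absent

    - name: Install docker-compose-plugin package
      become: true
      ansible.builtin.apt:
        name: docker-compose-plugin
        state: present
        update_cache: true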
2025-02-19 08:40:06.771419 | orchestrator | Wednesday 19 February 2025 08:40:06 +0000 (0:00:00.499) 0:07:03.202 **** 2025-02-19 08:40:08.329238 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:08.329664 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:08.330071 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:08.330957 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:08.331864 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:08.337067 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:08.337648 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:08.337666 | orchestrator | 2025-02-19 08:40:08.338759 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-02-19 08:40:08.340091 | orchestrator | Wednesday 19 February 2025 08:40:08 +0000 (0:00:01.571) 0:07:04.774 **** 2025-02-19 08:40:08.467781 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:40:08.543760 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:40:08.611481 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:40:08.678294 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:40:08.936218 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:40:09.057333 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:40:09.063802 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:40:09.063924 | orchestrator | 2025-02-19 08:40:09.063945 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-02-19 08:40:09.063964 | orchestrator | Wednesday 19 February 2025 08:40:09 +0000 (0:00:00.723) 0:07:05.497 **** 2025-02-19 08:40:16.796620 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:16.796812 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:40:16.796845 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:40:16.797454 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:40:16.798997 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:40:16.799744 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:40:16.800135 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:40:16.800942 | orchestrator | 2025-02-19 08:40:16.801971 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-02-19 08:40:16.802492 | orchestrator | Wednesday 19 February 2025 08:40:16 +0000 (0:00:07.740) 0:07:13.237 **** 2025-02-19 08:40:18.160771 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:18.160905 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:40:18.162082 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:40:18.162704 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:40:18.163729 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:40:18.164434 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:40:18.164925 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:40:18.165965 | orchestrator | 2025-02-19 08:40:18.166379 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-02-19 08:40:18.167123 | orchestrator | Wednesday 19 February 2025 08:40:18 +0000 (0:00:01.366) 0:07:14.604 **** 2025-02-19 08:40:19.972565 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:19.973275 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:40:19.974300 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:40:19.975360 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:40:19.976402 | orchestrator | changed: [testbed-node-0] 2025-02-19 
08:40:19.977128 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:40:19.978524 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:40:19.979329 | orchestrator | 2025-02-19 08:40:19.979954 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-02-19 08:40:19.980720 | orchestrator | Wednesday 19 February 2025 08:40:19 +0000 (0:00:01.810) 0:07:16.415 **** 2025-02-19 08:40:21.933779 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:21.934075 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:40:21.935316 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:40:21.936801 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:40:21.937948 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:40:21.937980 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:40:21.938056 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:40:21.938961 | orchestrator | 2025-02-19 08:40:21.939029 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-19 08:40:21.939976 | orchestrator | Wednesday 19 February 2025 08:40:21 +0000 (0:00:01.961) 0:07:18.376 **** 2025-02-19 08:40:22.445983 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:22.870836 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:22.872126 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:22.872206 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:22.873182 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:22.874143 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:22.876120 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:22.877180 | orchestrator | 2025-02-19 08:40:22.877528 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-19 08:40:22.878511 | orchestrator | Wednesday 19 February 2025 08:40:22 +0000 (0:00:00.937) 0:07:19.314 **** 2025-02-19 08:40:23.037678 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:40:23.107304 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:40:23.177005 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:40:23.254949 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:40:23.323467 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:40:23.762307 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:40:23.764127 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:40:23.764193 | orchestrator | 2025-02-19 08:40:23.765156 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-02-19 08:40:23.765776 | orchestrator | Wednesday 19 February 2025 08:40:23 +0000 (0:00:00.890) 0:07:20.205 **** 2025-02-19 08:40:23.892102 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:40:23.966052 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:40:24.036185 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:40:24.100032 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:40:24.174461 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:40:24.302236 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:40:24.302446 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:40:24.302791 | orchestrator | 2025-02-19 08:40:24.303946 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-02-19 08:40:24.448023 | orchestrator | Wednesday 19 February 2025 08:40:24 +0000 (0:00:00.540) 0:07:20.746 **** 2025-02-19 08:40:24.448169 | orchestrator | ok: 
[testbed-manager] 2025-02-19 08:40:24.708241 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:24.773923 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:24.843814 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:24.918005 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:25.058003 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:25.062749 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:25.063000 | orchestrator | 2025-02-19 08:40:25.064477 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-02-19 08:40:25.065497 | orchestrator | Wednesday 19 February 2025 08:40:25 +0000 (0:00:00.753) 0:07:21.499 **** 2025-02-19 08:40:25.202187 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:25.280158 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:25.349241 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:25.422113 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:25.527695 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:25.655429 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:25.656948 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:25.658646 | orchestrator | 2025-02-19 08:40:25.659769 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-02-19 08:40:25.660894 | orchestrator | Wednesday 19 February 2025 08:40:25 +0000 (0:00:00.597) 0:07:22.097 **** 2025-02-19 08:40:25.813321 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:25.879241 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:25.970761 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:26.042708 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:26.114308 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:26.250001 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:26.251326 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:26.252993 | orchestrator | 2025-02-19 08:40:26.254964 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-02-19 08:40:26.256066 | orchestrator | Wednesday 19 February 2025 08:40:26 +0000 (0:00:00.598) 0:07:22.695 **** 2025-02-19 08:40:31.585815 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:31.586097 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:31.587241 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:31.591103 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:31.591620 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:31.591675 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:31.591699 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:31.591732 | orchestrator | 2025-02-19 08:40:31.592194 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-02-19 08:40:31.593065 | orchestrator | Wednesday 19 February 2025 08:40:31 +0000 (0:00:05.333) 0:07:28.029 **** 2025-02-19 08:40:31.733442 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:40:31.800990 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:40:31.868089 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:40:32.164713 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:40:32.244128 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:40:32.389100 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:40:32.389292 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:40:32.390237 | orchestrator | 2025-02-19 08:40:32.390827 | orchestrator | TASK [osism.services.chrony : Include 
distribution specific install tasks] ***** 2025-02-19 08:40:32.391273 | orchestrator | Wednesday 19 February 2025 08:40:32 +0000 (0:00:00.804) 0:07:28.833 **** 2025-02-19 08:40:33.407356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:40:33.408758 | orchestrator | 2025-02-19 08:40:33.408811 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-02-19 08:40:33.410304 | orchestrator | Wednesday 19 February 2025 08:40:33 +0000 (0:00:01.019) 0:07:29.852 **** 2025-02-19 08:40:35.202662 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:35.203187 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:35.206970 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:35.208074 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:35.208108 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:35.208129 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:35.209119 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:35.209938 | orchestrator | 2025-02-19 08:40:35.210410 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-02-19 08:40:35.211127 | orchestrator | Wednesday 19 February 2025 08:40:35 +0000 (0:00:01.792) 0:07:31.645 **** 2025-02-19 08:40:36.379543 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:36.380293 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:36.380353 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:36.381666 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:36.381910 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:36.383119 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:36.384140 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:36.384983 | orchestrator | 2025-02-19 08:40:36.385346 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-02-19 08:40:36.385991 | orchestrator | Wednesday 19 February 2025 08:40:36 +0000 (0:00:01.178) 0:07:32.824 **** 2025-02-19 08:40:36.947652 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:37.024408 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:37.105844 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:37.547191 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:37.547911 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:37.548835 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:37.550084 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:37.551098 | orchestrator | 2025-02-19 08:40:37.551554 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-02-19 08:40:37.551964 | orchestrator | Wednesday 19 February 2025 08:40:37 +0000 (0:00:01.165) 0:07:33.989 **** 2025-02-19 08:40:39.281872 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-19 08:40:39.282141 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-19 08:40:39.282518 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-19 08:40:39.282929 | orchestrator | 
changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-19 08:40:39.283689 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-19 08:40:39.284881 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-19 08:40:39.285042 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-02-19 08:40:39.285462 | orchestrator | 2025-02-19 08:40:39.285962 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-02-19 08:40:39.286897 | orchestrator | Wednesday 19 February 2025 08:40:39 +0000 (0:00:01.734) 0:07:35.723 **** 2025-02-19 08:40:40.143911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:40:40.144495 | orchestrator | 2025-02-19 08:40:40.145796 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-02-19 08:40:40.147505 | orchestrator | Wednesday 19 February 2025 08:40:40 +0000 (0:00:00.862) 0:07:36.585 **** 2025-02-19 08:40:49.430609 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:40:49.430784 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:40:49.431794 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:40:49.436399 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:40:49.436777 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:40:49.438121 | orchestrator | changed: [testbed-manager] 2025-02-19 08:40:49.439321 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:40:49.439802 | orchestrator | 2025-02-19 08:40:49.440529 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-02-19 08:40:49.441132 | orchestrator | Wednesday 19 February 2025 08:40:49 +0000 (0:00:09.288) 0:07:45.874 **** 2025-02-19 08:40:51.303684 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:51.304665 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:51.304772 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:51.304807 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:51.305178 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:51.305218 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:51.306626 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:51.307293 | orchestrator | 2025-02-19 08:40:51.307877 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-02-19 08:40:51.308355 | orchestrator | Wednesday 19 February 2025 08:40:51 +0000 (0:00:01.863) 0:07:47.738 **** 2025-02-19 08:40:52.916687 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:52.917456 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:52.919831 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:52.920084 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:52.920112 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:52.920132 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:52.921189 | orchestrator | 2025-02-19 08:40:52.921949 | orchestrator | RUNNING 
HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-02-19 08:40:52.922117 | orchestrator | Wednesday 19 February 2025 08:40:52 +0000 (0:00:01.620) 0:07:49.359 **** 2025-02-19 08:40:54.215801 | orchestrator | changed: [testbed-manager] 2025-02-19 08:40:54.216223 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:40:54.216822 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:40:54.220686 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:40:54.220800 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:40:54.220815 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:40:54.220824 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:40:54.220836 | orchestrator | 2025-02-19 08:40:54.220983 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-02-19 08:40:54.221397 | orchestrator | 2025-02-19 08:40:54.221807 | orchestrator | TASK [Include hardening role] ************************************************** 2025-02-19 08:40:54.222160 | orchestrator | Wednesday 19 February 2025 08:40:54 +0000 (0:00:01.301) 0:07:50.660 **** 2025-02-19 08:40:54.347572 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:40:54.412501 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:40:54.481411 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:40:54.543814 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:40:54.616819 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:40:54.737182 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:40:54.738327 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:40:54.738378 | orchestrator | 2025-02-19 08:40:54.742830 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-02-19 08:40:54.744113 | orchestrator | 2025-02-19 08:40:54.746656 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-02-19 08:40:54.750338 | orchestrator | Wednesday 19 February 2025 08:40:54 +0000 (0:00:00.520) 0:07:51.181 **** 2025-02-19 08:40:56.143788 | orchestrator | changed: [testbed-manager] 2025-02-19 08:40:56.144136 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:40:56.145029 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:40:56.146885 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:40:56.149812 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:40:56.151675 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:40:56.152809 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:40:56.153751 | orchestrator | 2025-02-19 08:40:56.154670 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-02-19 08:40:56.155788 | orchestrator | Wednesday 19 February 2025 08:40:56 +0000 (0:00:01.404) 0:07:52.586 **** 2025-02-19 08:40:57.886203 | orchestrator | ok: [testbed-manager] 2025-02-19 08:40:57.887317 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:40:57.888114 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:40:57.889757 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:40:57.890967 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:40:57.892175 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:40:57.892902 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:40:57.894091 | orchestrator | 2025-02-19 08:40:57.894613 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-02-19 08:40:57.895373 | orchestrator | Wednesday 19 
February 2025 08:40:57 +0000 (0:00:01.742) 0:07:54.328 **** 2025-02-19 08:40:58.014744 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:40:58.087229 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:40:58.153457 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:40:58.220137 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:40:58.291379 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:40:58.709532 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:40:58.709891 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:40:58.711010 | orchestrator | 2025-02-19 08:40:58.712488 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-02-19 08:40:58.713182 | orchestrator | Wednesday 19 February 2025 08:40:58 +0000 (0:00:00.823) 0:07:55.152 **** 2025-02-19 08:41:00.036329 | orchestrator | changed: [testbed-manager] 2025-02-19 08:41:00.037141 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:41:00.037262 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:41:00.038113 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:41:00.039205 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:41:00.040297 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:41:00.040909 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:41:00.041901 | orchestrator | 2025-02-19 08:41:00.042782 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-02-19 08:41:00.043381 | orchestrator | 2025-02-19 08:41:00.043904 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-02-19 08:41:00.044623 | orchestrator | Wednesday 19 February 2025 08:41:00 +0000 (0:00:01.328) 0:07:56.481 **** 2025-02-19 08:41:01.085702 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:41:01.085888 | orchestrator | 2025-02-19 08:41:01.085920 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-19 08:41:01.554361 | orchestrator | Wednesday 19 February 2025 08:41:01 +0000 (0:00:01.049) 0:07:57.530 **** 2025-02-19 08:41:01.554463 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:01.986634 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:01.987278 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:01.987942 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:01.988200 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:01.988815 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:01.989464 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:01.989854 | orchestrator | 2025-02-19 08:41:01.990305 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-19 08:41:01.993706 | orchestrator | Wednesday 19 February 2025 08:41:01 +0000 (0:00:00.902) 0:07:58.433 **** 2025-02-19 08:41:03.181623 | orchestrator | changed: [testbed-manager] 2025-02-19 08:41:03.183103 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:41:03.183158 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:41:03.183183 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:41:03.184262 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:41:03.184679 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:41:03.185107 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:41:03.186874 | orchestrator | 2025-02-19 
08:41:03.187184 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-02-19 08:41:03.187989 | orchestrator | Wednesday 19 February 2025 08:41:03 +0000 (0:00:01.184) 0:07:59.618 **** 2025-02-19 08:41:04.275447 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:41:04.280006 | orchestrator | 2025-02-19 08:41:04.282784 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-19 08:41:04.282849 | orchestrator | Wednesday 19 February 2025 08:41:04 +0000 (0:00:01.101) 0:08:00.719 **** 2025-02-19 08:41:05.153743 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:05.154782 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:05.155097 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:05.155954 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:05.157131 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:05.157813 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:05.158315 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:05.159086 | orchestrator | 2025-02-19 08:41:05.160576 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-19 08:41:05.587677 | orchestrator | Wednesday 19 February 2025 08:41:05 +0000 (0:00:00.879) 0:08:01.598 **** 2025-02-19 08:41:05.587818 | orchestrator | changed: [testbed-manager] 2025-02-19 08:41:06.311386 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:41:06.311929 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:41:06.312811 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:41:06.313350 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:41:06.314295 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:41:06.314680 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:41:06.315561 | orchestrator | 2025-02-19 08:41:06.317020 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:41:06.317227 | orchestrator | 2025-02-19 08:41:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:41:06.317681 | orchestrator | 2025-02-19 08:41:06 | INFO  | Please wait and do not abort execution. 
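Note: the osism.commons.state includes at the end of the bootstrap play record the osism.bootstrap.status and osism.bootstrap.timestamp values as Ansible local facts, so that later runs can detect that a host has already been bootstrapped. A sketch of such a state write is given below; /etc/ansible/facts.d is the standard local-facts directory, but the concrete fact file name and content format used by osism.commons.state are assumptions, as only the task names appear in this log.

    # Sketch only; fact file name and payload are assumptions.
    - name: Create custom facts directory
      become: true
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Write state into file
      become: true
      ansible.builtin.copy:
        content: "true\n"
        dest: /etc/ansible/facts.d/osism_bootstrap_status.fact
        mode: "0644"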
2025-02-19 08:41:06.319197 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-02-19 08:41:06.320776 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-19 08:41:06.321196 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-19 08:41:06.321677 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-19 08:41:06.322132 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-02-19 08:41:06.322761 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-19 08:41:06.323210 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-19 08:41:06.323820 | orchestrator | 2025-02-19 08:41:06.324236 | orchestrator | 2025-02-19 08:41:06.326183 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:41:06.326720 | orchestrator | Wednesday 19 February 2025 08:41:06 +0000 (0:00:01.159) 0:08:02.758 **** 2025-02-19 08:41:06.326958 | orchestrator | =============================================================================== 2025-02-19 08:41:06.327717 | orchestrator | osism.commons.packages : Install required packages --------------------- 69.00s 2025-02-19 08:41:06.328189 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.47s 2025-02-19 08:41:06.328630 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.30s 2025-02-19 08:41:06.329048 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.16s 2025-02-19 08:41:06.329342 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.39s 2025-02-19 08:41:06.329681 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.12s 2025-02-19 08:41:06.329884 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.33s 2025-02-19 08:41:06.330138 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.67s 2025-02-19 08:41:06.330421 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.48s 2025-02-19 08:41:06.330691 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.29s 2025-02-19 08:41:06.330927 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.79s 2025-02-19 08:41:06.331129 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.58s 2025-02-19 08:41:06.331513 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.87s 2025-02-19 08:41:06.331743 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.74s 2025-02-19 08:41:06.331931 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.39s 2025-02-19 08:41:06.332190 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.26s 2025-02-19 08:41:06.332461 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.21s 2025-02-19 08:41:06.332767 | 
orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.21s 2025-02-19 08:41:06.332964 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.65s 2025-02-19 08:41:06.333234 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.48s 2025-02-19 08:41:07.070394 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-02-19 08:41:09.181053 | orchestrator | + osism apply network 2025-02-19 08:41:09.181149 | orchestrator | 2025-02-19 08:41:09 | INFO  | Task 4c019a98-6a7a-4665-a3b6-53e863b123a5 (network) was prepared for execution. 2025-02-19 08:41:12.898010 | orchestrator | 2025-02-19 08:41:09 | INFO  | It takes a moment until task 4c019a98-6a7a-4665-a3b6-53e863b123a5 (network) has been started and output is visible here. 2025-02-19 08:41:12.898214 | orchestrator | 2025-02-19 08:41:12.902006 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-02-19 08:41:12.902123 | orchestrator | 2025-02-19 08:41:12.902195 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-02-19 08:41:12.902288 | orchestrator | Wednesday 19 February 2025 08:41:12 +0000 (0:00:00.285) 0:00:00.285 **** 2025-02-19 08:41:12.994652 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-02-19 08:41:13.066981 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:13.144786 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:13.236638 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:13.314450 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:13.393986 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:13.637750 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:13.639058 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:13.640764 | orchestrator | 2025-02-19 08:41:13.642515 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-02-19 08:41:13.642718 | orchestrator | Wednesday 19 February 2025 08:41:13 +0000 (0:00:00.740) 0:00:01.025 **** 2025-02-19 08:41:14.932444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:41:14.933292 | orchestrator | 2025-02-19 08:41:14.933339 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-02-19 08:41:14.936406 | orchestrator | Wednesday 19 February 2025 08:41:14 +0000 (0:00:01.293) 0:00:02.318 **** 2025-02-19 08:41:16.991934 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:16.992241 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:16.992980 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:16.994130 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:16.994549 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:16.998195 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:16.999844 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:17.000259 | orchestrator | 2025-02-19 08:41:17.001641 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-02-19 08:41:17.002567 | orchestrator | Wednesday 19 February 2025 08:41:16 +0000 (0:00:02.064) 0:00:04.383 **** 2025-02-19 08:41:18.796433 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:18.797126 | orchestrator | ok: 
[testbed-node-0] 2025-02-19 08:41:18.798860 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:18.799526 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:18.800878 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:18.801976 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:18.802641 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:18.803912 | orchestrator | 2025-02-19 08:41:18.804331 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-02-19 08:41:18.804892 | orchestrator | Wednesday 19 February 2025 08:41:18 +0000 (0:00:01.800) 0:00:06.183 **** 2025-02-19 08:41:19.377880 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-02-19 08:41:19.378297 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-02-19 08:41:19.482907 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-02-19 08:41:19.483031 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-02-19 08:41:20.007362 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-02-19 08:41:20.009424 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-02-19 08:41:20.010796 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-02-19 08:41:20.011512 | orchestrator | 2025-02-19 08:41:20.012498 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-02-19 08:41:20.013339 | orchestrator | Wednesday 19 February 2025 08:41:19 +0000 (0:00:01.211) 0:00:07.395 **** 2025-02-19 08:41:22.393169 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 08:41:22.393344 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-19 08:41:22.393540 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 08:41:22.394715 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-19 08:41:22.397950 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-19 08:41:22.399325 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-19 08:41:22.401469 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-19 08:41:22.401633 | orchestrator | 2025-02-19 08:41:22.403158 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-02-19 08:41:22.403619 | orchestrator | Wednesday 19 February 2025 08:41:22 +0000 (0:00:02.386) 0:00:09.781 **** 2025-02-19 08:41:24.137233 | orchestrator | changed: [testbed-manager] 2025-02-19 08:41:24.137761 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:41:24.137888 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:41:24.137966 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:41:24.141549 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:41:24.660739 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:41:24.660861 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:41:24.660881 | orchestrator | 2025-02-19 08:41:24.660897 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-02-19 08:41:24.660913 | orchestrator | Wednesday 19 February 2025 08:41:24 +0000 (0:00:01.742) 0:00:11.523 **** 2025-02-19 08:41:24.660944 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 08:41:24.762326 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 08:41:25.209573 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-19 08:41:25.210553 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-19 08:41:25.210601 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-19 
08:41:25.215676 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-19 08:41:25.216617 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-19 08:41:25.216636 | orchestrator | 2025-02-19 08:41:25.216650 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-02-19 08:41:25.217087 | orchestrator | Wednesday 19 February 2025 08:41:25 +0000 (0:00:01.076) 0:00:12.599 **** 2025-02-19 08:41:25.840498 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:25.937984 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:26.377213 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:26.377859 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:26.379017 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:26.382794 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:26.545396 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:26.545511 | orchestrator | 2025-02-19 08:41:26.545532 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-02-19 08:41:26.545549 | orchestrator | Wednesday 19 February 2025 08:41:26 +0000 (0:00:01.164) 0:00:13.764 **** 2025-02-19 08:41:26.545637 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:41:26.631397 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:41:26.716419 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:41:26.807388 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:41:26.887678 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:41:27.036392 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:41:27.036905 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:41:27.037681 | orchestrator | 2025-02-19 08:41:27.040829 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-02-19 08:41:29.547982 | orchestrator | Wednesday 19 February 2025 08:41:27 +0000 (0:00:00.657) 0:00:14.422 **** 2025-02-19 08:41:29.548108 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:29.551718 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:29.551827 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:29.551851 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:29.551872 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:29.553282 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:29.555013 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:29.556007 | orchestrator | 2025-02-19 08:41:29.556776 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-02-19 08:41:29.557637 | orchestrator | Wednesday 19 February 2025 08:41:29 +0000 (0:00:02.510) 0:00:16.933 **** 2025-02-19 08:41:29.814853 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:41:29.901363 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:41:29.994246 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:41:30.080340 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:41:30.510158 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:41:30.510575 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:41:30.511685 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-02-19 08:41:30.512870 | orchestrator | 2025-02-19 08:41:30.513882 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-02-19 08:41:30.514980 | orchestrator | Wednesday 19 February 2025 08:41:30 +0000 
(0:00:00.966) 0:00:17.899 **** 2025-02-19 08:41:32.344052 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:41:32.344218 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:41:32.345881 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:32.348316 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:41:32.353466 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:41:32.355324 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:41:32.356075 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:41:32.357087 | orchestrator | 2025-02-19 08:41:32.358204 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-02-19 08:41:32.358577 | orchestrator | Wednesday 19 February 2025 08:41:32 +0000 (0:00:01.829) 0:00:19.728 **** 2025-02-19 08:41:33.663919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:41:33.664797 | orchestrator | 2025-02-19 08:41:33.664852 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-02-19 08:41:33.665804 | orchestrator | Wednesday 19 February 2025 08:41:33 +0000 (0:00:01.317) 0:00:21.046 **** 2025-02-19 08:41:34.466934 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:34.951533 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:34.952393 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:34.953051 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:34.959773 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:35.131440 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:35.131544 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:35.131555 | orchestrator | 2025-02-19 08:41:35.131565 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-02-19 08:41:35.131575 | orchestrator | Wednesday 19 February 2025 08:41:34 +0000 (0:00:01.290) 0:00:22.336 **** 2025-02-19 08:41:35.131645 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:35.220179 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:35.310293 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:35.393852 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:35.480956 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:35.628172 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:35.629882 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:36.334082 | orchestrator | 2025-02-19 08:41:36.334194 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-02-19 08:41:36.334207 | orchestrator | Wednesday 19 February 2025 08:41:35 +0000 (0:00:00.682) 0:00:23.019 **** 2025-02-19 08:41:36.334229 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-19 08:41:36.334441 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-02-19 08:41:36.338819 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-19 08:41:36.340027 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-02-19 08:41:36.340157 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-19 08:41:36.340916 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-02-19 08:41:36.341941 | orchestrator 
| changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-19 08:41:36.344852 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-02-19 08:41:36.442533 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-19 08:41:36.443427 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-02-19 08:41:36.957071 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-19 08:41:36.957383 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-02-19 08:41:36.958779 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-19 08:41:36.959602 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-02-19 08:41:36.960269 | orchestrator | 2025-02-19 08:41:36.961097 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-02-19 08:41:36.961724 | orchestrator | Wednesday 19 February 2025 08:41:36 +0000 (0:00:01.319) 0:00:24.339 **** 2025-02-19 08:41:37.146741 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:41:37.232031 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:41:37.320908 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:41:37.403152 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:41:37.480770 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:41:37.608669 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:41:37.609345 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:41:37.610519 | orchestrator | 2025-02-19 08:41:37.613567 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-02-19 08:41:38.476911 | orchestrator | Wednesday 19 February 2025 08:41:37 +0000 (0:00:00.660) 0:00:24.999 **** 2025-02-19 08:41:38.477056 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:41:39.887500 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:41:39.888819 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:41:39.892504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-2 2025-02-19 08:41:43.514270 | orchestrator | 2025-02-19 08:41:43.514390 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-02-19 08:41:43.514412 | orchestrator | Wednesday 19 February 2025 08:41:39 +0000 (0:00:02.273) 0:00:27.272 **** 2025-02-19 08:41:43.514446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.11', 'vni': 42}}) 2025-02-19 08:41:43.515504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.10', 'vni': 42}}) 2025-02-19 08:41:43.516129 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'vni': 42}}) 2025-02-19 08:41:43.517321 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.12', 'vni': 42}}) 2025-02-19 08:41:43.517345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.11', 'vni': 23}}) 2025-02-19 08:41:43.518865 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'vni': 23}}) 2025-02-19 08:41:43.519209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.10', 'vni': 23}}) 2025-02-19 08:41:43.519630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.12', 'vni': 23}}) 2025-02-19 08:41:43.519971 | orchestrator | 2025-02-19 08:41:43.520679 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-02-19 08:41:43.521069 | orchestrator | Wednesday 19 February 2025 08:41:43 +0000 (0:00:03.626) 0:00:30.899 **** 2025-02-19 08:41:46.863432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.10', 'vni': 42}}) 2025-02-19 08:41:46.864467 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'vni': 42}}) 2025-02-19 08:41:46.864525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.11', 'vni': 42}}) 2025-02-19 08:41:46.864549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.12', 'vni': 42}}) 2025-02-19 08:41:46.864612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.10', 'vni': 23}}) 2025-02-19 08:41:46.864639 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'vni': 23}}) 2025-02-19 08:41:46.867552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.11', 'vni': 23}}) 2025-02-19 
08:41:46.867613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.12', 'vni': 23}}) 2025-02-19 08:41:48.249405 | orchestrator | 2025-02-19 08:41:48.249520 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-02-19 08:41:48.249541 | orchestrator | Wednesday 19 February 2025 08:41:46 +0000 (0:00:03.348) 0:00:34.247 **** 2025-02-19 08:41:48.249572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:41:48.249722 | orchestrator | 2025-02-19 08:41:48.250811 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-02-19 08:41:48.251895 | orchestrator | Wednesday 19 February 2025 08:41:48 +0000 (0:00:01.388) 0:00:35.635 **** 2025-02-19 08:41:48.762307 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:48.864710 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:41:49.498253 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:41:49.499479 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:41:49.499865 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:41:49.501103 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:41:49.502775 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:41:49.503064 | orchestrator | 2025-02-19 08:41:49.504133 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-02-19 08:41:49.504933 | orchestrator | Wednesday 19 February 2025 08:41:49 +0000 (0:00:01.250) 0:00:36.885 **** 2025-02-19 08:41:49.597802 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-02-19 08:41:49.597945 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-02-19 08:41:49.597968 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-02-19 08:41:49.598784 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-02-19 08:41:49.696765 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:41:49.696938 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-02-19 08:41:49.697877 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-02-19 08:41:49.699002 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-02-19 08:41:49.700573 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-02-19 08:41:49.808852 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:41:49.809073 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-02-19 08:41:49.810981 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-02-19 08:41:49.812495 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-02-19 08:41:49.812904 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-02-19 08:41:49.919434 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:41:49.920451 | 
orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-02-19 08:41:49.921224 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-02-19 08:41:49.925439 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-02-19 08:41:49.926424 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-02-19 08:41:50.008436 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:41:50.085353 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:41:51.322499 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:41:51.322918 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:41:51.322967 | orchestrator | 2025-02-19 08:41:51.323000 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-02-19 08:41:51.325413 | orchestrator | Wednesday 19 February 2025 08:41:51 +0000 (0:00:01.822) 0:00:38.708 **** 2025-02-19 08:41:51.533550 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:41:51.619219 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:41:51.997984 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:41:51.998220 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:41:51.999368 | orchestrator | 2025-02-19 08:41:51.999858 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-02-19 08:41:52.000683 | orchestrator | Wednesday 19 February 2025 08:41:51 +0000 (0:00:00.677) 0:00:39.385 **** 2025-02-19 08:41:52.180172 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:41:52.268386 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:41:52.356345 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:41:52.449542 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:41:52.534134 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:41:52.577563 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:41:52.579255 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:41:52.579763 | orchestrator | 2025-02-19 08:41:52.580907 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:41:52.581073 | orchestrator | 2025-02-19 08:41:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:41:52.583135 | orchestrator | 2025-02-19 08:41:52 | INFO  | Please wait and do not abort execution. 
2025-02-19 08:41:52.583233 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:41:52.587823 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:41:52.587874 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:41:52.587893 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:41:52.589474 | orchestrator | testbed-node-3 : ok=17  changed=3  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:41:52.592297 | orchestrator | testbed-node-4 : ok=17  changed=3  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:41:52.593156 | orchestrator | testbed-node-5 : ok=17  changed=3  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:41:52.594451 | orchestrator | 2025-02-19 08:41:52.595249 | orchestrator | 2025-02-19 08:41:52.595697 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:41:52.596210 | orchestrator | Wednesday 19 February 2025 08:41:52 +0000 (0:00:00.581) 0:00:39.967 **** 2025-02-19 08:41:52.596813 | orchestrator | =============================================================================== 2025-02-19 08:41:52.597366 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 3.63s 2025-02-19 08:41:52.598122 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 3.35s 2025-02-19 08:41:52.598443 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.51s 2025-02-19 08:41:52.599328 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.39s 2025-02-19 08:41:52.599832 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 2.27s 2025-02-19 08:41:52.601156 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.06s 2025-02-19 08:41:52.601342 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.83s 2025-02-19 08:41:52.602073 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.82s 2025-02-19 08:41:52.602797 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.80s 2025-02-19 08:41:52.602971 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.74s 2025-02-19 08:41:52.603722 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.39s 2025-02-19 08:41:52.604115 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.32s 2025-02-19 08:41:52.605385 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s 2025-02-19 08:41:52.605768 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.29s 2025-02-19 08:41:52.606252 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.29s 2025-02-19 08:41:52.606939 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.25s 2025-02-19 08:41:52.607226 | orchestrator | osism.commons.network : Create required directories --------------------- 1.21s 2025-02-19 08:41:52.607729 | orchestrator | 
osism.commons.network : Check if path for interface file exists --------- 1.16s 2025-02-19 08:41:52.608425 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.08s 2025-02-19 08:41:52.608889 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.97s 2025-02-19 08:41:53.185249 | orchestrator | + osism apply wireguard 2025-02-19 08:41:54.640860 | orchestrator | 2025-02-19 08:41:54 | INFO  | Task 92f3beb0-2231-4e31-b00f-cc2abaa94aa2 (wireguard) was prepared for execution. 2025-02-19 08:41:57.966091 | orchestrator | 2025-02-19 08:41:54 | INFO  | It takes a moment until task 92f3beb0-2231-4e31-b00f-cc2abaa94aa2 (wireguard) has been started and output is visible here. 2025-02-19 08:41:57.966240 | orchestrator | 2025-02-19 08:41:57.968628 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-02-19 08:41:57.968672 | orchestrator | 2025-02-19 08:41:57.969729 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-02-19 08:41:57.969761 | orchestrator | Wednesday 19 February 2025 08:41:57 +0000 (0:00:00.190) 0:00:00.190 **** 2025-02-19 08:41:59.520909 | orchestrator | ok: [testbed-manager] 2025-02-19 08:41:59.521620 | orchestrator | 2025-02-19 08:41:59.521688 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-02-19 08:41:59.521733 | orchestrator | Wednesday 19 February 2025 08:41:59 +0000 (0:00:01.555) 0:00:01.746 **** 2025-02-19 08:42:06.235473 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:06.235957 | orchestrator | 2025-02-19 08:42:06.236786 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-02-19 08:42:06.237088 | orchestrator | Wednesday 19 February 2025 08:42:06 +0000 (0:00:06.715) 0:00:08.461 **** 2025-02-19 08:42:06.921226 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:06.921843 | orchestrator | 2025-02-19 08:42:06.923102 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-02-19 08:42:06.923754 | orchestrator | Wednesday 19 February 2025 08:42:06 +0000 (0:00:00.686) 0:00:09.148 **** 2025-02-19 08:42:07.338365 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:07.891268 | orchestrator | 2025-02-19 08:42:07.891398 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-02-19 08:42:07.891427 | orchestrator | Wednesday 19 February 2025 08:42:07 +0000 (0:00:00.412) 0:00:09.561 **** 2025-02-19 08:42:07.891475 | orchestrator | ok: [testbed-manager] 2025-02-19 08:42:07.891556 | orchestrator | 2025-02-19 08:42:07.892713 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-02-19 08:42:08.473383 | orchestrator | Wednesday 19 February 2025 08:42:07 +0000 (0:00:00.555) 0:00:10.117 **** 2025-02-19 08:42:08.473520 | orchestrator | ok: [testbed-manager] 2025-02-19 08:42:08.474968 | orchestrator | 2025-02-19 08:42:08.475458 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-02-19 08:42:08.476455 | orchestrator | Wednesday 19 February 2025 08:42:08 +0000 (0:00:00.581) 0:00:10.699 **** 2025-02-19 08:42:08.928309 | orchestrator | ok: [testbed-manager] 2025-02-19 08:42:08.928800 | orchestrator | 2025-02-19 08:42:08.929911 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] 
************* 2025-02-19 08:42:08.930971 | orchestrator | Wednesday 19 February 2025 08:42:08 +0000 (0:00:00.456) 0:00:11.155 **** 2025-02-19 08:42:10.142111 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:10.142326 | orchestrator | 2025-02-19 08:42:10.143798 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-02-19 08:42:10.145490 | orchestrator | Wednesday 19 February 2025 08:42:10 +0000 (0:00:01.212) 0:00:12.368 **** 2025-02-19 08:42:11.090682 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-19 08:42:11.090878 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:11.092372 | orchestrator | 2025-02-19 08:42:11.093760 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-02-19 08:42:11.094485 | orchestrator | Wednesday 19 February 2025 08:42:11 +0000 (0:00:00.948) 0:00:13.316 **** 2025-02-19 08:42:12.860523 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:12.861342 | orchestrator | 2025-02-19 08:42:12.864246 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-02-19 08:42:12.866558 | orchestrator | Wednesday 19 February 2025 08:42:12 +0000 (0:00:01.770) 0:00:15.086 **** 2025-02-19 08:42:13.825551 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:13.826141 | orchestrator | 2025-02-19 08:42:13.826829 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:42:13.826871 | orchestrator | 2025-02-19 08:42:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:42:13.827789 | orchestrator | 2025-02-19 08:42:13 | INFO  | Please wait and do not abort execution. 
2025-02-19 08:42:13.827870 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:42:13.828756 | orchestrator | 2025-02-19 08:42:13.828893 | orchestrator | 2025-02-19 08:42:13.829051 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:42:13.829536 | orchestrator | Wednesday 19 February 2025 08:42:13 +0000 (0:00:00.966) 0:00:16.054 **** 2025-02-19 08:42:13.830081 | orchestrator | =============================================================================== 2025-02-19 08:42:13.830194 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.72s 2025-02-19 08:42:13.830641 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.77s 2025-02-19 08:42:13.831251 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.56s 2025-02-19 08:42:13.831677 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s 2025-02-19 08:42:13.832206 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s 2025-02-19 08:42:13.832540 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s 2025-02-19 08:42:13.833261 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.69s 2025-02-19 08:42:13.833476 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.58s 2025-02-19 08:42:13.833933 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s 2025-02-19 08:42:13.834116 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s 2025-02-19 08:42:13.834221 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2025-02-19 08:42:14.399989 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-02-19 08:42:14.435165 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-02-19 08:42:14.480539 | orchestrator | Dload Upload Total Spent Left Speed 2025-02-19 08:42:14.510148 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 186 0 --:--:-- --:--:-- --:--:-- 186 2025-02-19 08:42:14.526075 | orchestrator | + osism apply --environment custom workarounds 2025-02-19 08:42:15.992252 | orchestrator | 2025-02-19 08:42:15 | INFO  | Trying to run play workarounds in environment custom 2025-02-19 08:42:16.040176 | orchestrator | 2025-02-19 08:42:16 | INFO  | Task edb88cbb-c18d-46fb-9f4b-3d2a047c2b5e (workarounds) was prepared for execution. 2025-02-19 08:42:19.606417 | orchestrator | 2025-02-19 08:42:16 | INFO  | It takes a moment until task edb88cbb-c18d-46fb-9f4b-3d2a047c2b5e (workarounds) has been started and output is visible here. 
2025-02-19 08:42:19.606561 | orchestrator | 2025-02-19 08:42:19.607293 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 08:42:19.607557 | orchestrator | 2025-02-19 08:42:19.608603 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-02-19 08:42:19.610366 | orchestrator | Wednesday 19 February 2025 08:42:19 +0000 (0:00:00.151) 0:00:00.151 **** 2025-02-19 08:42:19.774409 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-02-19 08:42:19.862245 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-02-19 08:42:19.965473 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-02-19 08:42:20.070306 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-02-19 08:42:20.272713 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-02-19 08:42:20.435507 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-02-19 08:42:20.435799 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-02-19 08:42:20.437439 | orchestrator | 2025-02-19 08:42:20.438972 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-02-19 08:42:20.439640 | orchestrator | 2025-02-19 08:42:20.440531 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-02-19 08:42:20.441365 | orchestrator | Wednesday 19 February 2025 08:42:20 +0000 (0:00:00.832) 0:00:00.983 **** 2025-02-19 08:42:23.324064 | orchestrator | ok: [testbed-manager] 2025-02-19 08:42:23.324759 | orchestrator | 2025-02-19 08:42:23.327736 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-02-19 08:42:23.329728 | orchestrator | 2025-02-19 08:42:23.331404 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-02-19 08:42:23.331875 | orchestrator | Wednesday 19 February 2025 08:42:23 +0000 (0:00:02.885) 0:00:03.869 **** 2025-02-19 08:42:25.191885 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:42:25.192711 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:42:25.194274 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:42:25.194361 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:42:25.195852 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:42:25.196900 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:42:25.197664 | orchestrator | 2025-02-19 08:42:25.198411 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-02-19 08:42:25.198897 | orchestrator | 2025-02-19 08:42:25.200291 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-02-19 08:42:25.201691 | orchestrator | Wednesday 19 February 2025 08:42:25 +0000 (0:00:01.867) 0:00:05.736 **** 2025-02-19 08:42:26.740217 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-19 08:42:26.741139 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-19 08:42:26.741177 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-19 08:42:26.741201 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-19 08:42:26.741258 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-19 08:42:26.742251 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-19 08:42:26.742416 | orchestrator | 2025-02-19 08:42:26.742909 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-02-19 08:42:26.743615 | orchestrator | Wednesday 19 February 2025 08:42:26 +0000 (0:00:01.546) 0:00:07.283 **** 2025-02-19 08:42:30.443285 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:42:30.443543 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:42:30.443578 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:42:30.444416 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:42:30.444452 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:42:30.445042 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:42:30.447380 | orchestrator | 2025-02-19 08:42:30.447994 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-02-19 08:42:30.448470 | orchestrator | Wednesday 19 February 2025 08:42:30 +0000 (0:00:03.709) 0:00:10.992 **** 2025-02-19 08:42:30.603739 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:42:30.683461 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:42:30.763496 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:42:30.845383 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:42:31.182554 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:42:31.182898 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:42:31.184287 | orchestrator | 2025-02-19 08:42:31.184676 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-02-19 08:42:31.185703 | orchestrator | 2025-02-19 08:42:31.186528 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-02-19 08:42:31.187580 | orchestrator | Wednesday 19 February 2025 08:42:31 +0000 (0:00:00.737) 0:00:11.730 **** 2025-02-19 08:42:33.008229 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:33.008412 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:42:33.009344 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:42:33.011392 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:42:33.013078 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:42:33.013730 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:42:33.014580 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:42:33.018095 | orchestrator | 2025-02-19 08:42:33.018780 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-02-19 08:42:33.021311 | orchestrator | Wednesday 19 February 2025 08:42:33 +0000 (0:00:01.825) 0:00:13.555 **** 2025-02-19 08:42:34.863410 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:34.863587 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:42:34.865182 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:42:34.865281 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:42:34.869565 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:42:34.870794 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:42:34.870996 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:42:34.871498 | orchestrator | 
2025-02-19 08:42:34.872718 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-02-19 08:42:34.873009 | orchestrator | Wednesday 19 February 2025 08:42:34 +0000 (0:00:01.852) 0:00:15.408 **** 2025-02-19 08:42:36.406992 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:42:36.408133 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:42:36.408247 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:42:36.408273 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:42:36.409067 | orchestrator | ok: [testbed-manager] 2025-02-19 08:42:36.411787 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:42:36.413534 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:42:36.414217 | orchestrator | 2025-02-19 08:42:36.415087 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-02-19 08:42:36.416297 | orchestrator | Wednesday 19 February 2025 08:42:36 +0000 (0:00:01.546) 0:00:16.954 **** 2025-02-19 08:42:38.204573 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:42:38.205220 | orchestrator | changed: [testbed-manager] 2025-02-19 08:42:38.205641 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:42:38.207049 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:42:38.207988 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:42:38.208918 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:42:38.209825 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:42:38.211087 | orchestrator | 2025-02-19 08:42:38.211542 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-02-19 08:42:38.212894 | orchestrator | Wednesday 19 February 2025 08:42:38 +0000 (0:00:01.794) 0:00:18.748 **** 2025-02-19 08:42:38.369422 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:42:38.460247 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:42:38.548108 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:42:38.625586 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:42:38.703859 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:42:38.846304 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:42:38.846630 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:42:38.847490 | orchestrator | 2025-02-19 08:42:38.848736 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-02-19 08:42:38.849051 | orchestrator | 2025-02-19 08:42:38.849551 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-02-19 08:42:38.849949 | orchestrator | Wednesday 19 February 2025 08:42:38 +0000 (0:00:00.645) 0:00:19.394 **** 2025-02-19 08:42:41.570409 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:42:41.571150 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:42:41.571937 | orchestrator | ok: [testbed-manager] 2025-02-19 08:42:41.572253 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:42:41.573019 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:42:41.574440 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:42:41.574992 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:42:41.575769 | orchestrator | 2025-02-19 08:42:41.576050 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:42:41.576523 | orchestrator | 2025-02-19 08:42:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-02-19 08:42:41.576772 | orchestrator | 2025-02-19 08:42:41 | INFO  | Please wait and do not abort execution. 2025-02-19 08:42:41.577629 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:42:41.578355 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:41.578926 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:41.579480 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:41.580218 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:41.580527 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:41.581071 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:41.581719 | orchestrator | 2025-02-19 08:42:41.582120 | orchestrator | 2025-02-19 08:42:41.582538 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:42:41.582952 | orchestrator | Wednesday 19 February 2025 08:42:41 +0000 (0:00:02.724) 0:00:22.118 **** 2025-02-19 08:42:41.583349 | orchestrator | =============================================================================== 2025-02-19 08:42:41.584173 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s 2025-02-19 08:42:41.584515 | orchestrator | Apply netplan configuration --------------------------------------------- 2.89s 2025-02-19 08:42:41.584833 | orchestrator | Install python3-docker -------------------------------------------------- 2.72s 2025-02-19 08:42:41.585176 | orchestrator | Apply netplan configuration --------------------------------------------- 1.87s 2025-02-19 08:42:41.585482 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.85s 2025-02-19 08:42:41.586159 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.83s 2025-02-19 08:42:41.586855 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s 2025-02-19 08:42:41.587016 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s 2025-02-19 08:42:41.587377 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.55s 2025-02-19 08:42:41.587746 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s 2025-02-19 08:42:41.588031 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.74s 2025-02-19 08:42:41.588308 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s 2025-02-19 08:42:42.156463 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-02-19 08:42:43.605908 | orchestrator | 2025-02-19 08:42:43 | INFO  | Task 08ea8a15-adb3-4f1d-b902-3872d18d0e79 (reboot) was prepared for execution. 2025-02-19 08:42:46.886212 | orchestrator | 2025-02-19 08:42:43 | INFO  | It takes a moment until task 08ea8a15-adb3-4f1d-b902-3872d18d0e79 (reboot) has been started and output is visible here. 
2025-02-19 08:42:46.886331 | orchestrator | 2025-02-19 08:42:46.888921 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-19 08:42:46.888945 | orchestrator | 2025-02-19 08:42:46.890317 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-19 08:42:46.890339 | orchestrator | Wednesday 19 February 2025 08:42:46 +0000 (0:00:00.154) 0:00:00.154 **** 2025-02-19 08:42:46.985902 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:42:46.986701 | orchestrator | 2025-02-19 08:42:46.987780 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-19 08:42:46.988622 | orchestrator | Wednesday 19 February 2025 08:42:46 +0000 (0:00:00.100) 0:00:00.255 **** 2025-02-19 08:42:47.912395 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:42:47.912718 | orchestrator | 2025-02-19 08:42:47.913506 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-19 08:42:47.914357 | orchestrator | Wednesday 19 February 2025 08:42:47 +0000 (0:00:00.927) 0:00:01.182 **** 2025-02-19 08:42:48.033411 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:42:48.033864 | orchestrator | 2025-02-19 08:42:48.035506 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-19 08:42:48.036761 | orchestrator | 2025-02-19 08:42:48.037377 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-19 08:42:48.037824 | orchestrator | Wednesday 19 February 2025 08:42:48 +0000 (0:00:00.120) 0:00:01.303 **** 2025-02-19 08:42:48.129660 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:42:48.131668 | orchestrator | 2025-02-19 08:42:48.132321 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-19 08:42:48.132359 | orchestrator | Wednesday 19 February 2025 08:42:48 +0000 (0:00:00.097) 0:00:01.400 **** 2025-02-19 08:42:48.831574 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:42:48.831938 | orchestrator | 2025-02-19 08:42:48.832009 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-19 08:42:48.833086 | orchestrator | Wednesday 19 February 2025 08:42:48 +0000 (0:00:00.699) 0:00:02.099 **** 2025-02-19 08:42:48.953536 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:42:48.953837 | orchestrator | 2025-02-19 08:42:48.954880 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-19 08:42:48.955432 | orchestrator | 2025-02-19 08:42:48.957048 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-19 08:42:49.041202 | orchestrator | Wednesday 19 February 2025 08:42:48 +0000 (0:00:00.121) 0:00:02.221 **** 2025-02-19 08:42:49.041355 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:42:49.041837 | orchestrator | 2025-02-19 08:42:49.042428 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-19 08:42:49.043337 | orchestrator | Wednesday 19 February 2025 08:42:49 +0000 (0:00:00.090) 0:00:02.312 **** 2025-02-19 08:42:49.818665 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:42:49.819347 | orchestrator | 2025-02-19 08:42:49.819385 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-19 
08:42:49.819410 | orchestrator | Wednesday 19 February 2025 08:42:49 +0000 (0:00:00.772) 0:00:03.085 **** 2025-02-19 08:42:49.930807 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:42:49.931141 | orchestrator | 2025-02-19 08:42:49.932197 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-19 08:42:49.934478 | orchestrator | 2025-02-19 08:42:49.934927 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-19 08:42:49.934959 | orchestrator | Wednesday 19 February 2025 08:42:49 +0000 (0:00:00.116) 0:00:03.201 **** 2025-02-19 08:42:50.034477 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:42:50.034743 | orchestrator | 2025-02-19 08:42:50.035965 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-19 08:42:50.036812 | orchestrator | Wednesday 19 February 2025 08:42:50 +0000 (0:00:00.101) 0:00:03.303 **** 2025-02-19 08:42:50.708686 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:42:50.708954 | orchestrator | 2025-02-19 08:42:50.709862 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-19 08:42:50.710408 | orchestrator | Wednesday 19 February 2025 08:42:50 +0000 (0:00:00.676) 0:00:03.979 **** 2025-02-19 08:42:50.811104 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:42:50.812451 | orchestrator | 2025-02-19 08:42:50.813129 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-19 08:42:50.813694 | orchestrator | 2025-02-19 08:42:50.814368 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-19 08:42:50.815155 | orchestrator | Wednesday 19 February 2025 08:42:50 +0000 (0:00:00.100) 0:00:04.080 **** 2025-02-19 08:42:50.912146 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:42:50.913071 | orchestrator | 2025-02-19 08:42:50.913809 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-19 08:42:50.915238 | orchestrator | Wednesday 19 February 2025 08:42:50 +0000 (0:00:00.102) 0:00:04.182 **** 2025-02-19 08:42:51.646141 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:42:51.647879 | orchestrator | 2025-02-19 08:42:51.647940 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-19 08:42:51.757839 | orchestrator | Wednesday 19 February 2025 08:42:51 +0000 (0:00:00.731) 0:00:04.913 **** 2025-02-19 08:42:51.758127 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:42:51.758259 | orchestrator | 2025-02-19 08:42:51.759227 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-19 08:42:51.761101 | orchestrator | 2025-02-19 08:42:51.859288 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-19 08:42:51.859396 | orchestrator | Wednesday 19 February 2025 08:42:51 +0000 (0:00:00.111) 0:00:05.025 **** 2025-02-19 08:42:51.859427 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:42:51.859892 | orchestrator | 2025-02-19 08:42:51.861000 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-19 08:42:51.861504 | orchestrator | Wednesday 19 February 2025 08:42:51 +0000 (0:00:00.101) 0:00:05.127 **** 2025-02-19 08:42:52.549904 | orchestrator | changed: [testbed-node-5] 2025-02-19 
08:42:52.550723 | orchestrator | 2025-02-19 08:42:52.578567 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-19 08:42:52.578654 | orchestrator | Wednesday 19 February 2025 08:42:52 +0000 (0:00:00.693) 0:00:05.820 **** 2025-02-19 08:42:52.578697 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:42:52.578737 | orchestrator | 2025-02-19 08:42:52.579430 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:42:52.579450 | orchestrator | 2025-02-19 08:42:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:42:52.579716 | orchestrator | 2025-02-19 08:42:52 | INFO  | Please wait and do not abort execution. 2025-02-19 08:42:52.579730 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:52.580182 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:52.580588 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:52.581356 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:52.583417 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:52.583464 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:42:52.583491 | orchestrator | 2025-02-19 08:42:52.583498 | orchestrator | 2025-02-19 08:42:52.583505 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:42:52.583512 | orchestrator | Wednesday 19 February 2025 08:42:52 +0000 (0:00:00.029) 0:00:05.850 **** 2025-02-19 08:42:52.583518 | orchestrator | =============================================================================== 2025-02-19 08:42:52.583525 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.50s 2025-02-19 08:42:52.583531 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s 2025-02-19 08:42:52.583552 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s 2025-02-19 08:42:53.082824 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-02-19 08:42:54.576758 | orchestrator | 2025-02-19 08:42:54 | INFO  | Task 21d963b6-09f6-4321-866e-dba2295b6de4 (wait-for-connection) was prepared for execution. 2025-02-19 08:42:58.130722 | orchestrator | 2025-02-19 08:42:54 | INFO  | It takes a moment until task 21d963b6-09f6-4321-866e-dba2295b6de4 (wait-for-connection) has been started and output is visible here. 
2025-02-19 08:42:58.130862 | orchestrator | 2025-02-19 08:42:58.131420 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-02-19 08:42:58.132645 | orchestrator | 2025-02-19 08:42:58.133362 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-02-19 08:42:58.135036 | orchestrator | Wednesday 19 February 2025 08:42:58 +0000 (0:00:00.207) 0:00:00.207 **** 2025-02-19 08:43:10.922389 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:43:10.923257 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:43:10.923315 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:43:10.923354 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:43:10.924062 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:43:10.924859 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:43:10.925974 | orchestrator | 2025-02-19 08:43:10.926813 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:43:10.927675 | orchestrator | 2025-02-19 08:43:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:43:10.927748 | orchestrator | 2025-02-19 08:43:10 | INFO  | Please wait and do not abort execution. 2025-02-19 08:43:10.929053 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:43:10.930112 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:43:10.930861 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:43:10.931514 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:43:10.932450 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:43:10.932838 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:43:10.933317 | orchestrator | 2025-02-19 08:43:10.933574 | orchestrator | 2025-02-19 08:43:10.934082 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:43:10.934397 | orchestrator | Wednesday 19 February 2025 08:43:10 +0000 (0:00:12.789) 0:00:12.996 **** 2025-02-19 08:43:10.935059 | orchestrator | =============================================================================== 2025-02-19 08:43:10.936088 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.79s 2025-02-19 08:43:11.517147 | orchestrator | + osism apply hddtemp 2025-02-19 08:43:13.056055 | orchestrator | 2025-02-19 08:43:13 | INFO  | Task 3835e743-e13f-41d3-a4f4-eccede3c596f (hddtemp) was prepared for execution. 2025-02-19 08:43:16.351004 | orchestrator | 2025-02-19 08:43:13 | INFO  | It takes a moment until task 3835e743-e13f-41d3-a4f4-eccede3c596f (hddtemp) has been started and output is visible here. 
2025-02-19 08:43:16.351162 | orchestrator | 2025-02-19 08:43:16.351388 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-02-19 08:43:16.353791 | orchestrator | 2025-02-19 08:43:16.353843 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-02-19 08:43:16.354976 | orchestrator | Wednesday 19 February 2025 08:43:16 +0000 (0:00:00.205) 0:00:00.205 **** 2025-02-19 08:43:16.505583 | orchestrator | ok: [testbed-manager] 2025-02-19 08:43:16.584385 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:43:16.661064 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:43:16.738713 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:43:16.813215 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:43:17.054051 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:43:17.054483 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:43:17.055145 | orchestrator | 2025-02-19 08:43:17.056456 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-02-19 08:43:17.057757 | orchestrator | Wednesday 19 February 2025 08:43:17 +0000 (0:00:00.703) 0:00:00.908 **** 2025-02-19 08:43:18.243551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:43:18.248062 | orchestrator | 2025-02-19 08:43:20.302238 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-02-19 08:43:20.302368 | orchestrator | Wednesday 19 February 2025 08:43:18 +0000 (0:00:01.188) 0:00:02.097 **** 2025-02-19 08:43:20.302417 | orchestrator | ok: [testbed-manager] 2025-02-19 08:43:20.302518 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:43:20.303283 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:43:20.303553 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:43:20.305433 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:43:20.305706 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:43:20.306089 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:43:20.306440 | orchestrator | 2025-02-19 08:43:20.307062 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-02-19 08:43:20.307252 | orchestrator | Wednesday 19 February 2025 08:43:20 +0000 (0:00:02.061) 0:00:04.158 **** 2025-02-19 08:43:20.933488 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:43:21.043196 | orchestrator | changed: [testbed-manager] 2025-02-19 08:43:21.523894 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:43:21.525031 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:43:21.525073 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:43:21.525095 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:43:21.526298 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:43:21.526352 | orchestrator | 2025-02-19 08:43:21.526376 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-02-19 08:43:21.526923 | orchestrator | Wednesday 19 February 2025 08:43:21 +0000 (0:00:01.213) 0:00:05.371 **** 2025-02-19 08:43:23.009555 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:43:23.009941 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:43:23.011220 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:43:23.014226 | orchestrator | ok: [testbed-node-3] 2025-02-19 
08:43:23.016539 | orchestrator | ok: [testbed-manager] 2025-02-19 08:43:23.016811 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:43:23.016840 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:43:23.016860 | orchestrator | 2025-02-19 08:43:23.017130 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-02-19 08:43:23.018371 | orchestrator | Wednesday 19 February 2025 08:43:23 +0000 (0:00:01.493) 0:00:06.865 **** 2025-02-19 08:43:23.419423 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:43:23.504774 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:43:23.587861 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:43:23.676498 | orchestrator | changed: [testbed-manager] 2025-02-19 08:43:23.810840 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:43:23.811413 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:43:23.812345 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:43:23.813220 | orchestrator | 2025-02-19 08:43:23.815924 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-02-19 08:43:36.757346 | orchestrator | Wednesday 19 February 2025 08:43:23 +0000 (0:00:00.799) 0:00:07.665 **** 2025-02-19 08:43:36.757547 | orchestrator | changed: [testbed-manager] 2025-02-19 08:43:36.757695 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:43:36.757719 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:43:36.757734 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:43:36.757753 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:43:36.757807 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:43:36.758251 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:43:36.759130 | orchestrator | 2025-02-19 08:43:36.759337 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-02-19 08:43:36.759871 | orchestrator | Wednesday 19 February 2025 08:43:36 +0000 (0:00:12.941) 0:00:20.606 **** 2025-02-19 08:43:38.309668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:43:38.311443 | orchestrator | 2025-02-19 08:43:38.311765 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-02-19 08:43:38.315016 | orchestrator | Wednesday 19 February 2025 08:43:38 +0000 (0:00:01.553) 0:00:22.160 **** 2025-02-19 08:43:40.476791 | orchestrator | changed: [testbed-manager] 2025-02-19 08:43:40.477384 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:43:40.477798 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:43:40.478205 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:43:40.478859 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:43:40.479543 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:43:40.480068 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:43:40.480491 | orchestrator | 2025-02-19 08:43:40.481014 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:43:40.481543 | orchestrator | 2025-02-19 08:43:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:43:40.482394 | orchestrator | 2025-02-19 08:43:40 | INFO  | Please wait and do not abort execution. 
2025-02-19 08:43:40.482452 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:43:40.482773 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:40.483336 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:40.485155 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:40.485578 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:40.486196 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:40.486843 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:40.487298 | orchestrator | 2025-02-19 08:43:40.487953 | orchestrator | 2025-02-19 08:43:40.488516 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:43:40.489053 | orchestrator | Wednesday 19 February 2025 08:43:40 +0000 (0:00:02.171) 0:00:24.332 **** 2025-02-19 08:43:40.489586 | orchestrator | =============================================================================== 2025-02-19 08:43:40.490133 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.94s 2025-02-19 08:43:40.490674 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.17s 2025-02-19 08:43:40.491140 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.06s 2025-02-19 08:43:40.491732 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.55s 2025-02-19 08:43:40.492156 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.49s 2025-02-19 08:43:40.492658 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.21s 2025-02-19 08:43:40.493119 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.19s 2025-02-19 08:43:40.493825 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.80s 2025-02-19 08:43:40.494934 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.70s 2025-02-19 08:43:41.132129 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-02-19 08:43:42.923186 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-19 08:43:42.923585 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-02-19 08:43:42.923608 | orchestrator | + local max_attempts=60 2025-02-19 08:43:42.923668 | orchestrator | + local name=ceph-ansible 2025-02-19 08:43:42.923676 | orchestrator | + local attempt_num=1 2025-02-19 08:43:42.923689 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-02-19 08:43:42.958823 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-19 08:43:42.960611 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-02-19 08:43:42.960661 | orchestrator | + local max_attempts=60 2025-02-19 08:43:42.960668 | orchestrator | + local name=kolla-ansible 2025-02-19 08:43:42.960674 | orchestrator | + local attempt_num=1 2025-02-19 08:43:42.960686 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' kolla-ansible 2025-02-19 08:43:42.985944 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-19 08:43:42.987049 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-02-19 08:43:42.987091 | orchestrator | + local max_attempts=60 2025-02-19 08:43:42.987105 | orchestrator | + local name=osism-ansible 2025-02-19 08:43:42.987117 | orchestrator | + local attempt_num=1 2025-02-19 08:43:42.987138 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-02-19 08:43:43.021746 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-19 08:43:43.380136 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-19 08:43:43.380264 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-02-19 08:43:43.380304 | orchestrator | ARA in ceph-ansible already disabled. 2025-02-19 08:43:43.731161 | orchestrator | ARA in kolla-ansible already disabled. 2025-02-19 08:43:44.020671 | orchestrator | ARA in osism-ansible already disabled. 2025-02-19 08:43:44.324425 | orchestrator | ARA in osism-kubernetes already disabled. 2025-02-19 08:43:44.325514 | orchestrator | + osism apply gather-facts 2025-02-19 08:43:45.780543 | orchestrator | 2025-02-19 08:43:45 | INFO  | Task e5cbb14d-54a9-45a1-bf3b-b74948739ec1 (gather-facts) was prepared for execution. 2025-02-19 08:43:49.005498 | orchestrator | 2025-02-19 08:43:45 | INFO  | It takes a moment until task e5cbb14d-54a9-45a1-bf3b-b74948739ec1 (gather-facts) has been started and output is visible here. 2025-02-19 08:43:49.005721 | orchestrator | 2025-02-19 08:43:49.005991 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-19 08:43:49.006082 | orchestrator | 2025-02-19 08:43:49.006396 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-19 08:43:49.008031 | orchestrator | Wednesday 19 February 2025 08:43:48 +0000 (0:00:00.169) 0:00:00.169 **** 2025-02-19 08:43:54.352694 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:43:54.353923 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:43:54.355078 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:43:54.355518 | orchestrator | ok: [testbed-manager] 2025-02-19 08:43:54.356783 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:43:54.357837 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:43:54.361051 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:43:54.361804 | orchestrator | 2025-02-19 08:43:54.361844 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-19 08:43:54.361862 | orchestrator | 2025-02-19 08:43:54.361885 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-19 08:43:54.362146 | orchestrator | Wednesday 19 February 2025 08:43:54 +0000 (0:00:05.353) 0:00:05.523 **** 2025-02-19 08:43:54.510285 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:43:54.588817 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:43:54.669174 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:43:54.751719 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:43:54.836801 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:43:54.882602 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:43:54.882794 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:43:54.884089 | orchestrator | 2025-02-19 08:43:54.885063 | orchestrator | PLAY RECAP 
********************************************************************* 2025-02-19 08:43:54.885138 | orchestrator | 2025-02-19 08:43:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:43:54.885710 | orchestrator | 2025-02-19 08:43:54 | INFO  | Please wait and do not abort execution. 2025-02-19 08:43:54.886851 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:54.887654 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:54.887783 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:54.888568 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:54.889088 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:54.889463 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:54.890005 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-19 08:43:54.890417 | orchestrator | 2025-02-19 08:43:54.890863 | orchestrator | 2025-02-19 08:43:54.891244 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:43:54.891799 | orchestrator | Wednesday 19 February 2025 08:43:54 +0000 (0:00:00.530) 0:00:06.054 **** 2025-02-19 08:43:54.892100 | orchestrator | =============================================================================== 2025-02-19 08:43:54.892506 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s 2025-02-19 08:43:54.893023 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-02-19 08:43:55.489031 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-02-19 08:43:55.502124 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-02-19 08:43:55.520178 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-02-19 08:43:55.534160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-02-19 08:43:55.546119 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-02-19 08:43:55.563659 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-02-19 08:43:55.584495 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-02-19 08:43:55.606531 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-02-19 08:43:55.627078 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-02-19 08:43:55.639189 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-02-19 08:43:55.660730 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-02-19 08:43:55.675698 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-02-19 08:43:55.697694 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-02-19 08:43:55.715392 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-02-19 08:43:55.735265 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-02-19 08:43:55.756401 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-02-19 08:43:55.776923 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-02-19 08:43:55.797163 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-02-19 08:43:55.820312 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-02-19 08:43:55.841846 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-02-19 08:43:55.858204 | orchestrator | + [[ false == \t\r\u\e ]] 2025-02-19 08:43:56.315163 | orchestrator | changed 2025-02-19 08:43:56.407454 | 2025-02-19 08:43:56.407587 | TASK [Deploy services] 2025-02-19 08:43:56.525643 | orchestrator | skipping: Conditional result was False 2025-02-19 08:43:56.537436 | 2025-02-19 08:43:56.537557 | TASK [Deploy in a nutshell] 2025-02-19 08:43:57.246461 | orchestrator | + set -e 2025-02-19 08:43:57.246700 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-19 08:43:57.246735 | orchestrator | ++ export INTERACTIVE=false 2025-02-19 08:43:57.246754 | orchestrator | ++ INTERACTIVE=false 2025-02-19 08:43:57.246798 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-19 08:43:57.246817 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-19 08:43:57.246832 | orchestrator | + source /opt/manager-vars.sh 2025-02-19 08:43:57.246859 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-19 08:43:57.246883 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-19 08:43:57.246900 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-19 08:43:57.246914 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-19 08:43:57.246928 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-19 08:43:57.246942 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-19 08:43:57.246956 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-19 08:43:57.246970 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-19 08:43:57.246985 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-19 08:43:57.247000 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-19 08:43:57.247024 | orchestrator | ++ export ARA=false 2025-02-19 08:43:57.248752 | orchestrator | ++ ARA=false 2025-02-19 08:43:57.248809 | orchestrator | ++ export TEMPEST=false 2025-02-19 08:43:57.248823 | orchestrator | ++ TEMPEST=false 2025-02-19 08:43:57.248837 | orchestrator | ++ export IS_ZUUL=true 2025-02-19 08:43:57.248851 | orchestrator | ++ IS_ZUUL=true 2025-02-19 08:43:57.248865 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-02-19 08:43:57.248881 | 
orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-02-19 08:43:57.248895 | orchestrator | ++ export EXTERNAL_API=false 2025-02-19 08:43:57.248909 | orchestrator | ++ EXTERNAL_API=false 2025-02-19 08:43:57.248923 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-19 08:43:57.248937 | orchestrator | 2025-02-19 08:43:57.248951 | orchestrator | # PULL IMAGES 2025-02-19 08:43:57.248965 | orchestrator | 2025-02-19 08:43:57.248979 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-19 08:43:57.249003 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-19 08:43:57.249017 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-19 08:43:57.249031 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-19 08:43:57.249045 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-19 08:43:57.249059 | orchestrator | + echo 2025-02-19 08:43:57.249073 | orchestrator | + echo '# PULL IMAGES' 2025-02-19 08:43:57.249087 | orchestrator | + echo 2025-02-19 08:43:57.249111 | orchestrator | ++ semver latest 7.0.0 2025-02-19 08:43:57.314432 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-19 08:43:58.780934 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-19 08:43:58.781064 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-02-19 08:43:58.781119 | orchestrator | 2025-02-19 08:43:58 | INFO  | Trying to run play pull-images in environment custom 2025-02-19 08:43:58.839285 | orchestrator | 2025-02-19 08:43:58 | INFO  | Task 04ef23bd-cffd-434c-b620-53588fd6eb92 (pull-images) was prepared for execution. 2025-02-19 08:44:02.224699 | orchestrator | 2025-02-19 08:43:58 | INFO  | It takes a moment until task 04ef23bd-cffd-434c-b620-53588fd6eb92 (pull-images) has been started and output is visible here. 2025-02-19 08:44:02.224813 | orchestrator | 2025-02-19 08:44:02.225500 | orchestrator | PLAY [Pull images] ************************************************************* 2025-02-19 08:44:02.225811 | orchestrator | 2025-02-19 08:44:02.228447 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-02-19 08:44:34.641764 | orchestrator | Wednesday 19 February 2025 08:44:02 +0000 (0:00:00.163) 0:00:00.163 **** 2025-02-19 08:44:34.641929 | orchestrator | changed: [testbed-manager] 2025-02-19 08:44:34.643105 | orchestrator | 2025-02-19 08:44:34.643138 | orchestrator | TASK [Pull other images] ******************************************************* 2025-02-19 08:45:28.594340 | orchestrator | Wednesday 19 February 2025 08:44:34 +0000 (0:00:32.419) 0:00:32.582 **** 2025-02-19 08:45:28.594554 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-02-19 08:45:28.594911 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-02-19 08:45:28.594949 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-02-19 08:45:28.594977 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-02-19 08:45:28.595007 | orchestrator | changed: [testbed-manager] => (item=common) 2025-02-19 08:45:28.595022 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-02-19 08:45:28.595049 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-02-19 08:45:28.597996 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-02-19 08:45:28.598079 | orchestrator | changed: [testbed-manager] => (item=heat) 2025-02-19 08:45:28.598968 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-02-19 08:45:28.599948 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-02-19 08:45:28.600417 | orchestrator | 
changed: [testbed-manager] => (item=loadbalancer) 2025-02-19 08:45:28.600452 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-02-19 08:45:28.600917 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-02-19 08:45:28.602065 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-02-19 08:45:28.603032 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-02-19 08:45:28.607592 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-02-19 08:45:28.608965 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-02-19 08:45:28.609012 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-02-19 08:45:28.609027 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-02-19 08:45:28.609041 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-02-19 08:45:28.609057 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-02-19 08:45:28.609072 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-02-19 08:45:28.609086 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-02-19 08:45:28.609100 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-02-19 08:45:28.609122 | orchestrator | 2025-02-19 08:45:28.609285 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:45:28.609306 | orchestrator | 2025-02-19 08:45:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:45:28.609327 | orchestrator | 2025-02-19 08:45:28 | INFO  | Please wait and do not abort execution. 2025-02-19 08:45:28.609903 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:45:28.610570 | orchestrator | 2025-02-19 08:45:28.611146 | orchestrator | 2025-02-19 08:45:28.611718 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:45:28.612244 | orchestrator | Wednesday 19 February 2025 08:45:28 +0000 (0:00:53.949) 0:01:26.532 **** 2025-02-19 08:45:28.613474 | orchestrator | =============================================================================== 2025-02-19 08:45:28.614182 | orchestrator | Pull other images ------------------------------------------------------ 53.95s 2025-02-19 08:45:28.614224 | orchestrator | Pull keystone image ---------------------------------------------------- 32.42s 2025-02-19 08:45:30.909108 | orchestrator | 2025-02-19 08:45:30 | INFO  | Trying to run play wipe-partitions in environment custom 2025-02-19 08:45:30.960400 | orchestrator | 2025-02-19 08:45:30 | INFO  | Task e291525e-5a66-4824-ba67-bad80a600e76 (wipe-partitions) was prepared for execution. 2025-02-19 08:45:34.782398 | orchestrator | 2025-02-19 08:45:30 | INFO  | It takes a moment until task e291525e-5a66-4824-ba67-bad80a600e76 (wipe-partitions) has been started and output is visible here. 
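The pull-images play above pre-pulls the Kolla service images on the manager so later deployment steps do not wait on downloads. A rough shell equivalent is sketched below; the image list mirrors the loop items shown in the play output, while the registry, namespace, and exact image names are assumptions:

    REGISTRY=registry.example.org/kolla   # placeholder, not taken from this log
    TAG=2024.1                            # matches OPENSTACK_VERSION exported earlier in the trace
    for image in keystone aodh barbican ceilometer cinder common designate glance grafana \
                 heat horizon ironic loadbalancer magnum mariadb memcached neutron nova \
                 octavia opensearch openvswitch ovn placement rabbitmq redis skyline; do
        docker pull "${REGISTRY}/${image}:${TAG}"
    done

Some loop items (for example common and loadbalancer) may map to more than one actual image; the play output only shows the item names.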
2025-02-19 08:45:34.782540 | orchestrator | 2025-02-19 08:45:34.783396 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-02-19 08:45:34.785455 | orchestrator | 2025-02-19 08:45:34.788721 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-02-19 08:45:34.789766 | orchestrator | Wednesday 19 February 2025 08:45:34 +0000 (0:00:00.147) 0:00:00.147 **** 2025-02-19 08:45:35.425596 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:45:35.427020 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:45:35.427794 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:45:35.428812 | orchestrator | 2025-02-19 08:45:35.429705 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-02-19 08:45:35.431103 | orchestrator | Wednesday 19 February 2025 08:45:35 +0000 (0:00:00.645) 0:00:00.792 **** 2025-02-19 08:45:35.592968 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:45:35.697755 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:45:35.698553 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:45:35.698576 | orchestrator | 2025-02-19 08:45:35.699202 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-02-19 08:45:35.702658 | orchestrator | Wednesday 19 February 2025 08:45:35 +0000 (0:00:00.271) 0:00:01.064 **** 2025-02-19 08:45:36.506700 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:45:36.508315 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:45:36.508971 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:45:36.510094 | orchestrator | 2025-02-19 08:45:36.510571 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-02-19 08:45:36.511205 | orchestrator | Wednesday 19 February 2025 08:45:36 +0000 (0:00:00.808) 0:00:01.873 **** 2025-02-19 08:45:36.687891 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:45:36.792788 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:45:36.793035 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:45:36.793157 | orchestrator | 2025-02-19 08:45:36.793241 | orchestrator | TASK [Check device availability] *********************************************** 2025-02-19 08:45:36.793338 | orchestrator | Wednesday 19 February 2025 08:45:36 +0000 (0:00:00.284) 0:00:02.157 **** 2025-02-19 08:45:38.079106 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-02-19 08:45:38.079292 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-02-19 08:45:38.080261 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-02-19 08:45:38.081593 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-02-19 08:45:38.084544 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-02-19 08:45:38.085125 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-02-19 08:45:38.085164 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-02-19 08:45:38.086217 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-02-19 08:45:38.087115 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-02-19 08:45:38.088705 | orchestrator | 2025-02-19 08:45:38.089507 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-02-19 08:45:38.090928 | orchestrator | Wednesday 19 February 2025 08:45:38 +0000 (0:00:01.289) 0:00:03.447 **** 2025-02-19 08:45:39.478341 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-02-19 08:45:39.480343 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-02-19 08:45:39.483071 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-02-19 08:45:39.485703 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-02-19 08:45:39.485785 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-02-19 08:45:39.485807 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-02-19 08:45:39.488195 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-02-19 08:45:39.490462 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-02-19 08:45:39.491897 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-02-19 08:45:39.493894 | orchestrator | 2025-02-19 08:45:39.499797 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-02-19 08:45:39.500899 | orchestrator | Wednesday 19 February 2025 08:45:39 +0000 (0:00:01.397) 0:00:04.844 **** 2025-02-19 08:45:41.948823 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-02-19 08:45:41.949558 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-02-19 08:45:41.950292 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-02-19 08:45:41.951220 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-02-19 08:45:41.953251 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-02-19 08:45:41.955293 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-02-19 08:45:41.958570 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-02-19 08:45:41.961337 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-02-19 08:45:41.962488 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-02-19 08:45:41.963591 | orchestrator | 2025-02-19 08:45:41.965035 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-02-19 08:45:41.965702 | orchestrator | Wednesday 19 February 2025 08:45:41 +0000 (0:00:02.467) 0:00:07.312 **** 2025-02-19 08:45:42.574306 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:45:42.575138 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:45:42.576334 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:45:42.579593 | orchestrator | 2025-02-19 08:45:42.581377 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-02-19 08:45:42.581484 | orchestrator | Wednesday 19 February 2025 08:45:42 +0000 (0:00:00.630) 0:00:07.942 **** 2025-02-19 08:45:43.284230 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:45:43.287169 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:45:43.290181 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:45:43.291349 | orchestrator | 2025-02-19 08:45:43.293276 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:45:43.294804 | orchestrator | 2025-02-19 08:45:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:45:43.297132 | orchestrator | 2025-02-19 08:45:43 | INFO  | Please wait and do not abort execution. 
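The wipe-partitions play clears any previous Ceph data from the extra disks (/dev/sdb, /dev/sdc, /dev/sdd) on the three storage nodes before the cluster is redeployed. Done by hand, the destructive steps would look roughly like this; the commands are inferred from the task names, and the play's actual module options may differ:

    DEV=/dev/sdb                                      # repeat for /dev/sdc and /dev/sdd
    sudo wipefs --all "${DEV}"                        # "Wipe partitions with wipefs"
    sudo dd if=/dev/zero of="${DEV}" bs=1M count=32   # "Overwrite first 32M with zeros"
    sudo udevadm control --reload-rules               # "Reload udev rules"
    sudo udevadm trigger                              # "Request device events from the kernel"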
2025-02-19 08:45:43.297181 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:43.299792 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:43.300859 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:43.301984 | orchestrator | 2025-02-19 08:45:43.302974 | orchestrator | 2025-02-19 08:45:43.304166 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:45:43.305581 | orchestrator | Wednesday 19 February 2025 08:45:43 +0000 (0:00:00.706) 0:00:08.649 **** 2025-02-19 08:45:43.306231 | orchestrator | =============================================================================== 2025-02-19 08:45:43.307313 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.47s 2025-02-19 08:45:43.307897 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s 2025-02-19 08:45:43.308862 | orchestrator | Check device availability ----------------------------------------------- 1.29s 2025-02-19 08:45:43.309433 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.81s 2025-02-19 08:45:43.310266 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s 2025-02-19 08:45:43.310894 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.65s 2025-02-19 08:45:43.311500 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2025-02-19 08:45:43.312205 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-02-19 08:45:43.312608 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-02-19 08:45:45.588055 | orchestrator | 2025-02-19 08:45:45 | INFO  | Task 0264d82a-9020-4b8b-b9fd-eead4e20f523 (facts) was prepared for execution. 2025-02-19 08:45:49.140480 | orchestrator | 2025-02-19 08:45:45 | INFO  | It takes a moment until task 0264d82a-9020-4b8b-b9fd-eead4e20f523 (facts) has been started and output is visible here. 
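Earlier in that play, "Find all logical devices owned by UID 167" looks for leftover device-mapper volumes from a previous deployment; UID 167 is the ceph user inside the Ceph containers, so block devices owned by it are stale OSD logical volumes. A hedged approximation of that lookup, not the play's actual command:

    # Sketch only: list /dev/dm-* nodes owned by uid 167 (the containerized ceph user)
    find /dev -maxdepth 1 -name 'dm-*' -uid 167

In this run the rook and ceph removal tasks are skipped on all three nodes, consistent with nothing being found.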
2025-02-19 08:45:49.140680 | orchestrator | 2025-02-19 08:45:49.143235 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-19 08:45:49.143427 | orchestrator | 2025-02-19 08:45:49.143876 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-19 08:45:49.144592 | orchestrator | Wednesday 19 February 2025 08:45:49 +0000 (0:00:00.216) 0:00:00.216 **** 2025-02-19 08:45:50.501171 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:45:50.504218 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:45:50.504456 | orchestrator | ok: [testbed-manager] 2025-02-19 08:45:50.504490 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:45:50.504506 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:45:50.504529 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:45:50.504866 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:45:50.505093 | orchestrator | 2025-02-19 08:45:50.508545 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-19 08:45:50.733837 | orchestrator | Wednesday 19 February 2025 08:45:50 +0000 (0:00:01.357) 0:00:01.574 **** 2025-02-19 08:45:50.733966 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:45:50.822223 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:45:50.904076 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:45:50.987514 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:45:51.079920 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:45:51.916288 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:45:51.916501 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:45:51.916532 | orchestrator | 2025-02-19 08:45:51.916863 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-19 08:45:51.917073 | orchestrator | 2025-02-19 08:45:51.917335 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-19 08:45:51.917779 | orchestrator | Wednesday 19 February 2025 08:45:51 +0000 (0:00:01.420) 0:00:02.994 **** 2025-02-19 08:45:56.808992 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:45:56.809812 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:45:56.809840 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:45:56.812142 | orchestrator | ok: [testbed-manager] 2025-02-19 08:45:56.816394 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:45:56.816440 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:45:56.817330 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:45:56.818637 | orchestrator | 2025-02-19 08:45:56.818713 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-19 08:45:56.819422 | orchestrator | 2025-02-19 08:45:56.819482 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-19 08:45:56.819964 | orchestrator | Wednesday 19 February 2025 08:45:56 +0000 (0:00:04.893) 0:00:07.887 **** 2025-02-19 08:45:57.287174 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:45:57.397194 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:45:57.510925 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:45:57.709015 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:45:57.907262 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:45:57.964584 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:45:57.966904 | orchestrator | skipping: 
[testbed-node-5] 2025-02-19 08:45:57.967082 | orchestrator | 2025-02-19 08:45:57.967107 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:45:57.967121 | orchestrator | 2025-02-19 08:45:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:45:57.967134 | orchestrator | 2025-02-19 08:45:57 | INFO  | Please wait and do not abort execution. 2025-02-19 08:45:57.967153 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:57.967803 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:57.968667 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:57.969318 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:57.969697 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:57.969782 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:57.970427 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:45:57.970922 | orchestrator | 2025-02-19 08:45:57.971696 | orchestrator | 2025-02-19 08:45:57.971726 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:45:57.971829 | orchestrator | Wednesday 19 February 2025 08:45:57 +0000 (0:00:01.156) 0:00:09.044 **** 2025-02-19 08:45:57.971847 | orchestrator | =============================================================================== 2025-02-19 08:45:57.971864 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.89s 2025-02-19 08:45:57.972181 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.42s 2025-02-19 08:45:57.972350 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s 2025-02-19 08:45:57.972736 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.16s 2025-02-19 08:46:00.723221 | orchestrator | 2025-02-19 08:46:00 | INFO  | Task 4edb5f86-4a90-48dd-9621-c551d862cc61 (ceph-configure-lvm-volumes) was prepared for execution. 2025-02-19 08:46:05.087469 | orchestrator | 2025-02-19 08:46:00 | INFO  | It takes a moment until task 4edb5f86-4a90-48dd-9621-c551d862cc61 (ceph-configure-lvm-volumes) has been started and output is visible here. 
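The facts play above only has to ensure the custom facts directory exists on every host; the copy step is skipped, presumably because no extra fact files are configured for this testbed. Assuming the standard Ansible local-facts location (the role's actual path is not shown in this output), the same effect by hand would be:

    # Sketch only: files dropped here appear under ansible_local.<name> after the next
    # fact-gathering run.
    sudo mkdir -p /etc/ansible/facts.d
    echo '{"testbed": true}' | sudo tee /etc/ansible/facts.d/example.fact   # hypothetical example fact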
2025-02-19 08:46:05.087683 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-19 08:46:05.758754 | orchestrator | 2025-02-19 08:46:05.760154 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-19 08:46:05.760695 | orchestrator | 2025-02-19 08:46:05.760931 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-19 08:46:05.761770 | orchestrator | Wednesday 19 February 2025 08:46:05 +0000 (0:00:00.560) 0:00:00.560 **** 2025-02-19 08:46:06.048015 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-19 08:46:06.048727 | orchestrator | 2025-02-19 08:46:06.050133 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-19 08:46:06.051110 | orchestrator | Wednesday 19 February 2025 08:46:06 +0000 (0:00:00.294) 0:00:00.854 **** 2025-02-19 08:46:06.371147 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:46:06.374326 | orchestrator | 2025-02-19 08:46:06.375556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:06.377006 | orchestrator | Wednesday 19 February 2025 08:46:06 +0000 (0:00:00.320) 0:00:01.175 **** 2025-02-19 08:46:07.162414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-02-19 08:46:07.164784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-02-19 08:46:07.165163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-02-19 08:46:07.165613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-02-19 08:46:07.167170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-02-19 08:46:07.167523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-02-19 08:46:07.169233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-02-19 08:46:07.169708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-02-19 08:46:07.170457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-02-19 08:46:07.171134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-02-19 08:46:07.171871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-02-19 08:46:07.172382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-02-19 08:46:07.172979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-02-19 08:46:07.173234 | orchestrator | 2025-02-19 08:46:07.173864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:07.174468 | orchestrator | Wednesday 19 February 2025 08:46:07 +0000 (0:00:00.794) 0:00:01.970 **** 2025-02-19 08:46:07.400115 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:07.401071 | orchestrator | 2025-02-19 08:46:07.401988 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:07.403396 | orchestrator | Wednesday 19 February 2025 08:46:07 +0000 
(0:00:00.238) 0:00:02.209 **** 2025-02-19 08:46:07.630885 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:07.631654 | orchestrator | 2025-02-19 08:46:07.635846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:07.636914 | orchestrator | Wednesday 19 February 2025 08:46:07 +0000 (0:00:00.230) 0:00:02.439 **** 2025-02-19 08:46:07.883766 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:07.885082 | orchestrator | 2025-02-19 08:46:07.885546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:07.885581 | orchestrator | Wednesday 19 February 2025 08:46:07 +0000 (0:00:00.252) 0:00:02.692 **** 2025-02-19 08:46:08.094180 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:08.094431 | orchestrator | 2025-02-19 08:46:08.094477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:08.094803 | orchestrator | Wednesday 19 February 2025 08:46:08 +0000 (0:00:00.209) 0:00:02.902 **** 2025-02-19 08:46:08.325301 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:08.327348 | orchestrator | 2025-02-19 08:46:08.327955 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:08.329026 | orchestrator | Wednesday 19 February 2025 08:46:08 +0000 (0:00:00.231) 0:00:03.133 **** 2025-02-19 08:46:08.522768 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:08.523556 | orchestrator | 2025-02-19 08:46:08.524871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:08.525209 | orchestrator | Wednesday 19 February 2025 08:46:08 +0000 (0:00:00.197) 0:00:03.330 **** 2025-02-19 08:46:08.711095 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:08.711379 | orchestrator | 2025-02-19 08:46:08.711415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:08.711473 | orchestrator | Wednesday 19 February 2025 08:46:08 +0000 (0:00:00.187) 0:00:03.518 **** 2025-02-19 08:46:08.914409 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:08.916186 | orchestrator | 2025-02-19 08:46:08.916232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:09.512525 | orchestrator | Wednesday 19 February 2025 08:46:08 +0000 (0:00:00.207) 0:00:03.725 **** 2025-02-19 08:46:09.512760 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283) 2025-02-19 08:46:09.514560 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283) 2025-02-19 08:46:09.514592 | orchestrator | 2025-02-19 08:46:09.514607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:09.514629 | orchestrator | Wednesday 19 February 2025 08:46:09 +0000 (0:00:00.595) 0:00:04.321 **** 2025-02-19 08:46:10.265241 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0f115ae7-332f-47b5-bfba-4efd1297123a) 2025-02-19 08:46:10.265803 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0f115ae7-332f-47b5-bfba-4efd1297123a) 2025-02-19 08:46:10.266821 | orchestrator | 2025-02-19 08:46:10.267883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 
08:46:10.271888 | orchestrator | Wednesday 19 February 2025 08:46:10 +0000 (0:00:00.752) 0:00:05.073 **** 2025-02-19 08:46:10.700890 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7ac42676-4a1f-422d-9e47-87a492d5a795) 2025-02-19 08:46:10.701153 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7ac42676-4a1f-422d-9e47-87a492d5a795) 2025-02-19 08:46:10.701200 | orchestrator | 2025-02-19 08:46:10.702187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:10.702502 | orchestrator | Wednesday 19 February 2025 08:46:10 +0000 (0:00:00.437) 0:00:05.511 **** 2025-02-19 08:46:11.130274 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b50482d4-467d-4151-94c3-bb810c8ecc19) 2025-02-19 08:46:11.531935 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b50482d4-467d-4151-94c3-bb810c8ecc19) 2025-02-19 08:46:11.532058 | orchestrator | 2025-02-19 08:46:11.532078 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:11.532093 | orchestrator | Wednesday 19 February 2025 08:46:11 +0000 (0:00:00.422) 0:00:05.933 **** 2025-02-19 08:46:11.532124 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-19 08:46:11.532258 | orchestrator | 2025-02-19 08:46:11.533230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:11.533265 | orchestrator | Wednesday 19 February 2025 08:46:11 +0000 (0:00:00.405) 0:00:06.338 **** 2025-02-19 08:46:12.110388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-02-19 08:46:12.110866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-02-19 08:46:12.110895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-02-19 08:46:12.111282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-02-19 08:46:12.111735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-02-19 08:46:12.112172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-02-19 08:46:12.114158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-02-19 08:46:12.115619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-02-19 08:46:12.117411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-02-19 08:46:12.122365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-02-19 08:46:12.123114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-02-19 08:46:12.123132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-02-19 08:46:12.125030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-02-19 08:46:12.127309 | orchestrator | 2025-02-19 08:46:12.129346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:12.129777 | orchestrator | Wednesday 19 February 2025 08:46:12 
+0000 (0:00:00.575) 0:00:06.914 **** 2025-02-19 08:46:12.419948 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:12.632293 | orchestrator | 2025-02-19 08:46:12.632433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:12.632464 | orchestrator | Wednesday 19 February 2025 08:46:12 +0000 (0:00:00.312) 0:00:07.226 **** 2025-02-19 08:46:12.632509 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:12.633896 | orchestrator | 2025-02-19 08:46:12.880988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:12.881125 | orchestrator | Wednesday 19 February 2025 08:46:12 +0000 (0:00:00.213) 0:00:07.439 **** 2025-02-19 08:46:12.881166 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:12.883564 | orchestrator | 2025-02-19 08:46:12.883950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:12.884165 | orchestrator | Wednesday 19 February 2025 08:46:12 +0000 (0:00:00.251) 0:00:07.691 **** 2025-02-19 08:46:13.092962 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:13.093461 | orchestrator | 2025-02-19 08:46:13.093566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:13.096585 | orchestrator | Wednesday 19 February 2025 08:46:13 +0000 (0:00:00.210) 0:00:07.902 **** 2025-02-19 08:46:13.837262 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:13.838800 | orchestrator | 2025-02-19 08:46:13.839015 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:13.839455 | orchestrator | Wednesday 19 February 2025 08:46:13 +0000 (0:00:00.744) 0:00:08.646 **** 2025-02-19 08:46:14.114168 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:14.114313 | orchestrator | 2025-02-19 08:46:14.114517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:14.115190 | orchestrator | Wednesday 19 February 2025 08:46:14 +0000 (0:00:00.275) 0:00:08.922 **** 2025-02-19 08:46:14.487788 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:14.493663 | orchestrator | 2025-02-19 08:46:14.756828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:14.756966 | orchestrator | Wednesday 19 February 2025 08:46:14 +0000 (0:00:00.371) 0:00:09.293 **** 2025-02-19 08:46:14.757017 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:14.757185 | orchestrator | 2025-02-19 08:46:14.757226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:14.760472 | orchestrator | Wednesday 19 February 2025 08:46:14 +0000 (0:00:00.270) 0:00:09.564 **** 2025-02-19 08:46:15.576542 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-02-19 08:46:15.577851 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-02-19 08:46:15.578010 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-02-19 08:46:15.578090 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-02-19 08:46:15.578357 | orchestrator | 2025-02-19 08:46:15.578671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:15.578901 | orchestrator | Wednesday 19 February 2025 08:46:15 +0000 (0:00:00.821) 0:00:10.386 **** 2025-02-19 08:46:15.844747 | orchestrator | 
skipping: [testbed-node-3] 2025-02-19 08:46:15.845308 | orchestrator | 2025-02-19 08:46:15.845882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:15.846273 | orchestrator | Wednesday 19 February 2025 08:46:15 +0000 (0:00:00.266) 0:00:10.652 **** 2025-02-19 08:46:16.164915 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:16.386770 | orchestrator | 2025-02-19 08:46:16.386890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:16.386913 | orchestrator | Wednesday 19 February 2025 08:46:16 +0000 (0:00:00.318) 0:00:10.971 **** 2025-02-19 08:46:16.386947 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:16.387310 | orchestrator | 2025-02-19 08:46:16.388390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:16.389408 | orchestrator | Wednesday 19 February 2025 08:46:16 +0000 (0:00:00.221) 0:00:11.192 **** 2025-02-19 08:46:16.627734 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:16.627931 | orchestrator | 2025-02-19 08:46:16.628301 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-19 08:46:16.631469 | orchestrator | Wednesday 19 February 2025 08:46:16 +0000 (0:00:00.242) 0:00:11.435 **** 2025-02-19 08:46:16.880090 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-02-19 08:46:16.880475 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-02-19 08:46:16.883858 | orchestrator | 2025-02-19 08:46:16.884272 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-19 08:46:16.884302 | orchestrator | Wednesday 19 February 2025 08:46:16 +0000 (0:00:00.253) 0:00:11.689 **** 2025-02-19 08:46:17.043087 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:17.047109 | orchestrator | 2025-02-19 08:46:17.440064 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-19 08:46:17.440201 | orchestrator | Wednesday 19 February 2025 08:46:17 +0000 (0:00:00.160) 0:00:11.849 **** 2025-02-19 08:46:17.440237 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:17.441266 | orchestrator | 2025-02-19 08:46:17.448023 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-19 08:46:17.681344 | orchestrator | Wednesday 19 February 2025 08:46:17 +0000 (0:00:00.399) 0:00:12.248 **** 2025-02-19 08:46:17.681465 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:17.682536 | orchestrator | 2025-02-19 08:46:17.683407 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-19 08:46:17.688412 | orchestrator | Wednesday 19 February 2025 08:46:17 +0000 (0:00:00.239) 0:00:12.488 **** 2025-02-19 08:46:17.883042 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:46:17.888840 | orchestrator | 2025-02-19 08:46:17.893124 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-19 08:46:17.893503 | orchestrator | Wednesday 19 February 2025 08:46:17 +0000 (0:00:00.196) 0:00:12.685 **** 2025-02-19 08:46:18.179520 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}}) 2025-02-19 08:46:18.179721 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': 'bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}}) 2025-02-19 08:46:18.179784 | orchestrator | 2025-02-19 08:46:18.179841 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-19 08:46:18.182335 | orchestrator | Wednesday 19 February 2025 08:46:18 +0000 (0:00:00.303) 0:00:12.989 **** 2025-02-19 08:46:18.434345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}})  2025-02-19 08:46:18.434505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}})  2025-02-19 08:46:18.437951 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:18.438186 | orchestrator | 2025-02-19 08:46:18.438557 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-19 08:46:18.438899 | orchestrator | Wednesday 19 February 2025 08:46:18 +0000 (0:00:00.249) 0:00:13.239 **** 2025-02-19 08:46:18.747350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}})  2025-02-19 08:46:18.748636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}})  2025-02-19 08:46:18.749473 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:18.752867 | orchestrator | 2025-02-19 08:46:18.753319 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-19 08:46:18.754107 | orchestrator | Wednesday 19 February 2025 08:46:18 +0000 (0:00:00.317) 0:00:13.557 **** 2025-02-19 08:46:19.057182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}})  2025-02-19 08:46:19.057881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}})  2025-02-19 08:46:19.058804 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:19.059287 | orchestrator | 2025-02-19 08:46:19.060024 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-19 08:46:19.060922 | orchestrator | Wednesday 19 February 2025 08:46:19 +0000 (0:00:00.308) 0:00:13.865 **** 2025-02-19 08:46:19.206242 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:46:19.206396 | orchestrator | 2025-02-19 08:46:19.206857 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-19 08:46:19.207109 | orchestrator | Wednesday 19 February 2025 08:46:19 +0000 (0:00:00.147) 0:00:14.012 **** 2025-02-19 08:46:19.397906 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:46:19.398186 | orchestrator | 2025-02-19 08:46:19.398503 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-19 08:46:19.399121 | orchestrator | Wednesday 19 February 2025 08:46:19 +0000 (0:00:00.193) 0:00:14.205 **** 2025-02-19 08:46:19.683994 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:19.684619 | orchestrator | 2025-02-19 08:46:19.686086 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-19 08:46:19.688876 | orchestrator | Wednesday 19 February 2025 08:46:19 +0000 (0:00:00.287) 0:00:14.493 **** 2025-02-19 08:46:19.885491 | orchestrator | skipping: [testbed-node-3] 2025-02-19 
08:46:19.889330 | orchestrator | 2025-02-19 08:46:19.889383 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-19 08:46:20.109703 | orchestrator | Wednesday 19 February 2025 08:46:19 +0000 (0:00:00.199) 0:00:14.692 **** 2025-02-19 08:46:20.109874 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:20.110109 | orchestrator | 2025-02-19 08:46:20.116277 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-19 08:46:20.519971 | orchestrator | Wednesday 19 February 2025 08:46:20 +0000 (0:00:00.226) 0:00:14.919 **** 2025-02-19 08:46:20.520160 | orchestrator | ok: [testbed-node-3] => { 2025-02-19 08:46:20.520239 | orchestrator |  "ceph_osd_devices": { 2025-02-19 08:46:20.520263 | orchestrator |  "sdb": { 2025-02-19 08:46:20.520783 | orchestrator |  "osd_lvm_uuid": "3ffe4904-1899-5051-bec6-9b9e5f20cdb9" 2025-02-19 08:46:20.520965 | orchestrator |  }, 2025-02-19 08:46:20.522686 | orchestrator |  "sdc": { 2025-02-19 08:46:20.524564 | orchestrator |  "osd_lvm_uuid": "bbf6aa6c-a724-5ce6-b507-3cef42d33bac" 2025-02-19 08:46:20.524959 | orchestrator |  } 2025-02-19 08:46:20.525545 | orchestrator |  } 2025-02-19 08:46:20.526073 | orchestrator | } 2025-02-19 08:46:20.526953 | orchestrator | 2025-02-19 08:46:20.528565 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-19 08:46:20.529275 | orchestrator | Wednesday 19 February 2025 08:46:20 +0000 (0:00:00.410) 0:00:15.330 **** 2025-02-19 08:46:20.764571 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:20.765366 | orchestrator | 2025-02-19 08:46:20.974368 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-19 08:46:20.974493 | orchestrator | Wednesday 19 February 2025 08:46:20 +0000 (0:00:00.241) 0:00:15.572 **** 2025-02-19 08:46:20.974530 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:20.977814 | orchestrator | 2025-02-19 08:46:20.979339 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-19 08:46:20.979422 | orchestrator | Wednesday 19 February 2025 08:46:20 +0000 (0:00:00.212) 0:00:15.784 **** 2025-02-19 08:46:21.140574 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:46:21.140743 | orchestrator | 2025-02-19 08:46:21.141035 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-19 08:46:21.141744 | orchestrator | Wednesday 19 February 2025 08:46:21 +0000 (0:00:00.165) 0:00:15.949 **** 2025-02-19 08:46:21.450574 | orchestrator | changed: [testbed-node-3] => { 2025-02-19 08:46:21.450768 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-19 08:46:21.450798 | orchestrator |  "ceph_osd_devices": { 2025-02-19 08:46:21.451928 | orchestrator |  "sdb": { 2025-02-19 08:46:21.452231 | orchestrator |  "osd_lvm_uuid": "3ffe4904-1899-5051-bec6-9b9e5f20cdb9" 2025-02-19 08:46:21.452302 | orchestrator |  }, 2025-02-19 08:46:21.452608 | orchestrator |  "sdc": { 2025-02-19 08:46:21.452739 | orchestrator |  "osd_lvm_uuid": "bbf6aa6c-a724-5ce6-b507-3cef42d33bac" 2025-02-19 08:46:21.453030 | orchestrator |  } 2025-02-19 08:46:21.453246 | orchestrator |  }, 2025-02-19 08:46:21.454756 | orchestrator |  "lvm_volumes": [ 2025-02-19 08:46:21.454871 | orchestrator |  { 2025-02-19 08:46:21.454966 | orchestrator |  "data": "osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9", 2025-02-19 08:46:21.456154 | orchestrator |  
"data_vg": "ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9" 2025-02-19 08:46:21.457455 | orchestrator |  }, 2025-02-19 08:46:21.457932 | orchestrator |  { 2025-02-19 08:46:21.457973 | orchestrator |  "data": "osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac", 2025-02-19 08:46:21.459901 | orchestrator |  "data_vg": "ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac" 2025-02-19 08:46:21.459943 | orchestrator |  } 2025-02-19 08:46:21.460029 | orchestrator |  ] 2025-02-19 08:46:21.460052 | orchestrator |  } 2025-02-19 08:46:21.460327 | orchestrator | } 2025-02-19 08:46:21.460672 | orchestrator | 2025-02-19 08:46:21.460880 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-19 08:46:21.461417 | orchestrator | Wednesday 19 February 2025 08:46:21 +0000 (0:00:00.310) 0:00:16.260 **** 2025-02-19 08:46:24.347291 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-19 08:46:24.348849 | orchestrator | 2025-02-19 08:46:24.349461 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-19 08:46:24.349631 | orchestrator | 2025-02-19 08:46:24.351707 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-19 08:46:24.351980 | orchestrator | Wednesday 19 February 2025 08:46:24 +0000 (0:00:02.896) 0:00:19.157 **** 2025-02-19 08:46:24.649394 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-19 08:46:24.650525 | orchestrator | 2025-02-19 08:46:24.650839 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-19 08:46:24.653973 | orchestrator | Wednesday 19 February 2025 08:46:24 +0000 (0:00:00.301) 0:00:19.458 **** 2025-02-19 08:46:24.937043 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:46:24.938639 | orchestrator | 2025-02-19 08:46:24.938795 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:24.940750 | orchestrator | Wednesday 19 February 2025 08:46:24 +0000 (0:00:00.285) 0:00:19.744 **** 2025-02-19 08:46:25.452769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-02-19 08:46:25.455181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-02-19 08:46:25.455390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-02-19 08:46:25.455907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-02-19 08:46:25.456407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-02-19 08:46:25.457293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-02-19 08:46:25.457724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-02-19 08:46:25.458692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-02-19 08:46:25.458903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-02-19 08:46:25.459309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-02-19 08:46:25.460273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-02-19 08:46:25.460530 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-02-19 08:46:25.461110 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-02-19 08:46:25.461407 | orchestrator | 2025-02-19 08:46:25.462560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:25.462977 | orchestrator | Wednesday 19 February 2025 08:46:25 +0000 (0:00:00.514) 0:00:20.259 **** 2025-02-19 08:46:25.698190 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:25.700802 | orchestrator | 2025-02-19 08:46:25.701240 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:25.701263 | orchestrator | Wednesday 19 February 2025 08:46:25 +0000 (0:00:00.247) 0:00:20.506 **** 2025-02-19 08:46:25.945768 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:25.946988 | orchestrator | 2025-02-19 08:46:25.948614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:25.948828 | orchestrator | Wednesday 19 February 2025 08:46:25 +0000 (0:00:00.247) 0:00:20.753 **** 2025-02-19 08:46:26.261010 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:26.261171 | orchestrator | 2025-02-19 08:46:26.261204 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:26.262699 | orchestrator | Wednesday 19 February 2025 08:46:26 +0000 (0:00:00.314) 0:00:21.068 **** 2025-02-19 08:46:26.482096 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:26.484355 | orchestrator | 2025-02-19 08:46:26.484400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:27.212080 | orchestrator | Wednesday 19 February 2025 08:46:26 +0000 (0:00:00.221) 0:00:21.290 **** 2025-02-19 08:46:27.212219 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:27.213473 | orchestrator | 2025-02-19 08:46:27.213800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:27.214146 | orchestrator | Wednesday 19 February 2025 08:46:27 +0000 (0:00:00.731) 0:00:22.022 **** 2025-02-19 08:46:27.485369 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:27.485605 | orchestrator | 2025-02-19 08:46:27.485703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:27.485741 | orchestrator | Wednesday 19 February 2025 08:46:27 +0000 (0:00:00.268) 0:00:22.291 **** 2025-02-19 08:46:27.693604 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:27.693815 | orchestrator | 2025-02-19 08:46:27.693851 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:27.917877 | orchestrator | Wednesday 19 February 2025 08:46:27 +0000 (0:00:00.208) 0:00:22.499 **** 2025-02-19 08:46:27.918097 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:27.920051 | orchestrator | 2025-02-19 08:46:27.920092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:27.920360 | orchestrator | Wednesday 19 February 2025 08:46:27 +0000 (0:00:00.228) 0:00:22.728 **** 2025-02-19 08:46:28.473485 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6) 2025-02-19 08:46:28.473688 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6) 2025-02-19 08:46:28.473720 | orchestrator | 2025-02-19 08:46:28.473808 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:28.474122 | orchestrator | Wednesday 19 February 2025 08:46:28 +0000 (0:00:00.551) 0:00:23.279 **** 2025-02-19 08:46:29.029054 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_923f2b44-0879-4277-a106-844be4b2565d) 2025-02-19 08:46:29.032013 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_923f2b44-0879-4277-a106-844be4b2565d) 2025-02-19 08:46:29.032121 | orchestrator | 2025-02-19 08:46:29.032519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:29.032734 | orchestrator | Wednesday 19 February 2025 08:46:29 +0000 (0:00:00.558) 0:00:23.837 **** 2025-02-19 08:46:29.598918 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0c5208c8-9aa1-4e87-9cdb-910770e18a0c) 2025-02-19 08:46:29.599148 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0c5208c8-9aa1-4e87-9cdb-910770e18a0c) 2025-02-19 08:46:29.605087 | orchestrator | 2025-02-19 08:46:29.608283 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:29.609265 | orchestrator | Wednesday 19 February 2025 08:46:29 +0000 (0:00:00.564) 0:00:24.402 **** 2025-02-19 08:46:30.203043 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_69806146-708c-4195-b6c7-ec061db9d03d) 2025-02-19 08:46:30.203215 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_69806146-708c-4195-b6c7-ec061db9d03d) 2025-02-19 08:46:30.203237 | orchestrator | 2025-02-19 08:46:30.203546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:30.204515 | orchestrator | Wednesday 19 February 2025 08:46:30 +0000 (0:00:00.608) 0:00:25.011 **** 2025-02-19 08:46:30.721309 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-19 08:46:30.721422 | orchestrator | 2025-02-19 08:46:30.724672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:30.724792 | orchestrator | Wednesday 19 February 2025 08:46:30 +0000 (0:00:00.516) 0:00:25.528 **** 2025-02-19 08:46:31.532905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-02-19 08:46:31.533143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-02-19 08:46:31.534339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-02-19 08:46:31.535507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-02-19 08:46:31.536025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-02-19 08:46:31.536733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-02-19 08:46:31.537133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-02-19 08:46:31.537690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-02-19 08:46:31.538240 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-02-19 08:46:31.538541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-02-19 08:46:31.539025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-02-19 08:46:31.539503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-02-19 08:46:31.540086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-02-19 08:46:31.541006 | orchestrator | 2025-02-19 08:46:31.541546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:31.542122 | orchestrator | Wednesday 19 February 2025 08:46:31 +0000 (0:00:00.809) 0:00:26.337 **** 2025-02-19 08:46:31.844065 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:31.845407 | orchestrator | 2025-02-19 08:46:31.845456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:31.845485 | orchestrator | Wednesday 19 February 2025 08:46:31 +0000 (0:00:00.315) 0:00:26.653 **** 2025-02-19 08:46:32.110162 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:32.112378 | orchestrator | 2025-02-19 08:46:32.112770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:32.113492 | orchestrator | Wednesday 19 February 2025 08:46:32 +0000 (0:00:00.267) 0:00:26.920 **** 2025-02-19 08:46:32.369791 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:32.370882 | orchestrator | 2025-02-19 08:46:32.370969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:32.371172 | orchestrator | Wednesday 19 February 2025 08:46:32 +0000 (0:00:00.259) 0:00:27.179 **** 2025-02-19 08:46:32.672061 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:32.676079 | orchestrator | 2025-02-19 08:46:32.677823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:32.679342 | orchestrator | Wednesday 19 February 2025 08:46:32 +0000 (0:00:00.300) 0:00:27.480 **** 2025-02-19 08:46:32.963961 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:32.965262 | orchestrator | 2025-02-19 08:46:32.965327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:32.966768 | orchestrator | Wednesday 19 February 2025 08:46:32 +0000 (0:00:00.289) 0:00:27.769 **** 2025-02-19 08:46:33.265742 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:33.265845 | orchestrator | 2025-02-19 08:46:33.269448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:33.272692 | orchestrator | Wednesday 19 February 2025 08:46:33 +0000 (0:00:00.306) 0:00:28.076 **** 2025-02-19 08:46:33.541205 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:33.544259 | orchestrator | 2025-02-19 08:46:33.548191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:33.550164 | orchestrator | Wednesday 19 February 2025 08:46:33 +0000 (0:00:00.273) 0:00:28.349 **** 2025-02-19 08:46:33.841050 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:33.841224 | orchestrator | 2025-02-19 08:46:33.845274 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-02-19 08:46:33.845430 | orchestrator | Wednesday 19 February 2025 08:46:33 +0000 (0:00:00.297) 0:00:28.647 **** 2025-02-19 08:46:35.012691 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-02-19 08:46:35.013168 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-02-19 08:46:35.014419 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-02-19 08:46:35.016538 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-02-19 08:46:35.017244 | orchestrator | 2025-02-19 08:46:35.018061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:35.018826 | orchestrator | Wednesday 19 February 2025 08:46:35 +0000 (0:00:01.174) 0:00:29.822 **** 2025-02-19 08:46:35.776465 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:35.777939 | orchestrator | 2025-02-19 08:46:35.779264 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:35.779949 | orchestrator | Wednesday 19 February 2025 08:46:35 +0000 (0:00:00.761) 0:00:30.583 **** 2025-02-19 08:46:36.045799 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:36.048823 | orchestrator | 2025-02-19 08:46:36.050554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:36.052883 | orchestrator | Wednesday 19 February 2025 08:46:36 +0000 (0:00:00.271) 0:00:30.854 **** 2025-02-19 08:46:36.309880 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:36.311676 | orchestrator | 2025-02-19 08:46:36.312808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:36.313968 | orchestrator | Wednesday 19 February 2025 08:46:36 +0000 (0:00:00.263) 0:00:31.118 **** 2025-02-19 08:46:36.517195 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:36.518227 | orchestrator | 2025-02-19 08:46:36.518609 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-19 08:46:36.519125 | orchestrator | Wednesday 19 February 2025 08:46:36 +0000 (0:00:00.208) 0:00:31.326 **** 2025-02-19 08:46:36.698003 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-02-19 08:46:36.698472 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-02-19 08:46:36.700104 | orchestrator | 2025-02-19 08:46:36.700733 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-19 08:46:36.701903 | orchestrator | Wednesday 19 February 2025 08:46:36 +0000 (0:00:00.179) 0:00:31.506 **** 2025-02-19 08:46:36.837520 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:36.837811 | orchestrator | 2025-02-19 08:46:36.838992 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-19 08:46:36.839494 | orchestrator | Wednesday 19 February 2025 08:46:36 +0000 (0:00:00.139) 0:00:31.645 **** 2025-02-19 08:46:36.985120 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:36.985337 | orchestrator | 2025-02-19 08:46:36.985944 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-19 08:46:36.986528 | orchestrator | Wednesday 19 February 2025 08:46:36 +0000 (0:00:00.148) 0:00:31.794 **** 2025-02-19 08:46:37.127949 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:37.129012 | orchestrator | 2025-02-19 
08:46:37.132734 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-19 08:46:37.135584 | orchestrator | Wednesday 19 February 2025 08:46:37 +0000 (0:00:00.142) 0:00:31.937 **** 2025-02-19 08:46:37.262485 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:46:37.264585 | orchestrator | 2025-02-19 08:46:37.265347 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-19 08:46:37.265504 | orchestrator | Wednesday 19 February 2025 08:46:37 +0000 (0:00:00.134) 0:00:32.072 **** 2025-02-19 08:46:37.464001 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '118242ed-6ea1-54c4-bfaa-1565dde441bc'}}) 2025-02-19 08:46:37.464730 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}}) 2025-02-19 08:46:37.465947 | orchestrator | 2025-02-19 08:46:37.468260 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-19 08:46:37.468750 | orchestrator | Wednesday 19 February 2025 08:46:37 +0000 (0:00:00.200) 0:00:32.272 **** 2025-02-19 08:46:37.636350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '118242ed-6ea1-54c4-bfaa-1565dde441bc'}})  2025-02-19 08:46:37.636900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}})  2025-02-19 08:46:37.637637 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:37.639753 | orchestrator | 2025-02-19 08:46:37.810544 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-19 08:46:37.810636 | orchestrator | Wednesday 19 February 2025 08:46:37 +0000 (0:00:00.172) 0:00:32.444 **** 2025-02-19 08:46:37.810691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '118242ed-6ea1-54c4-bfaa-1565dde441bc'}})  2025-02-19 08:46:37.810810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}})  2025-02-19 08:46:37.811577 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:37.812773 | orchestrator | 2025-02-19 08:46:37.814523 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-19 08:46:37.815541 | orchestrator | Wednesday 19 February 2025 08:46:37 +0000 (0:00:00.175) 0:00:32.620 **** 2025-02-19 08:46:38.216923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '118242ed-6ea1-54c4-bfaa-1565dde441bc'}})  2025-02-19 08:46:38.217089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}})  2025-02-19 08:46:38.217619 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:38.218207 | orchestrator | 2025-02-19 08:46:38.221309 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-19 08:46:38.373130 | orchestrator | Wednesday 19 February 2025 08:46:38 +0000 (0:00:00.405) 0:00:33.025 **** 2025-02-19 08:46:38.373263 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:46:38.374067 | orchestrator | 2025-02-19 08:46:38.376198 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-19 08:46:38.529204 | orchestrator | Wednesday 19 February 2025 08:46:38 
+0000 (0:00:00.155) 0:00:33.180 **** 2025-02-19 08:46:38.529329 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:46:38.530161 | orchestrator | 2025-02-19 08:46:38.531121 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-19 08:46:38.532029 | orchestrator | Wednesday 19 February 2025 08:46:38 +0000 (0:00:00.157) 0:00:33.338 **** 2025-02-19 08:46:38.689052 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:38.690226 | orchestrator | 2025-02-19 08:46:38.691192 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-19 08:46:38.692709 | orchestrator | Wednesday 19 February 2025 08:46:38 +0000 (0:00:00.157) 0:00:33.495 **** 2025-02-19 08:46:38.832333 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:38.833473 | orchestrator | 2025-02-19 08:46:38.834564 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-19 08:46:38.835462 | orchestrator | Wednesday 19 February 2025 08:46:38 +0000 (0:00:00.145) 0:00:33.641 **** 2025-02-19 08:46:38.971917 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:38.972508 | orchestrator | 2025-02-19 08:46:38.974392 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-19 08:46:38.975070 | orchestrator | Wednesday 19 February 2025 08:46:38 +0000 (0:00:00.139) 0:00:33.780 **** 2025-02-19 08:46:39.117524 | orchestrator | ok: [testbed-node-4] => { 2025-02-19 08:46:39.119690 | orchestrator |  "ceph_osd_devices": { 2025-02-19 08:46:39.120987 | orchestrator |  "sdb": { 2025-02-19 08:46:39.121019 | orchestrator |  "osd_lvm_uuid": "118242ed-6ea1-54c4-bfaa-1565dde441bc" 2025-02-19 08:46:39.121040 | orchestrator |  }, 2025-02-19 08:46:39.123158 | orchestrator |  "sdc": { 2025-02-19 08:46:39.124263 | orchestrator |  "osd_lvm_uuid": "f77e8fc9-ceed-59c4-8328-4d335fb6ee54" 2025-02-19 08:46:39.125564 | orchestrator |  } 2025-02-19 08:46:39.126091 | orchestrator |  } 2025-02-19 08:46:39.126837 | orchestrator | } 2025-02-19 08:46:39.127550 | orchestrator | 2025-02-19 08:46:39.128212 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-19 08:46:39.128604 | orchestrator | Wednesday 19 February 2025 08:46:39 +0000 (0:00:00.146) 0:00:33.926 **** 2025-02-19 08:46:39.270274 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:39.270798 | orchestrator | 2025-02-19 08:46:39.270838 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-19 08:46:39.270857 | orchestrator | Wednesday 19 February 2025 08:46:39 +0000 (0:00:00.153) 0:00:34.079 **** 2025-02-19 08:46:39.404410 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:39.404712 | orchestrator | 2025-02-19 08:46:39.404768 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-19 08:46:39.548734 | orchestrator | Wednesday 19 February 2025 08:46:39 +0000 (0:00:00.131) 0:00:34.211 **** 2025-02-19 08:46:39.548860 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:46:39.550278 | orchestrator | 2025-02-19 08:46:39.550306 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-19 08:46:39.550327 | orchestrator | Wednesday 19 February 2025 08:46:39 +0000 (0:00:00.144) 0:00:34.356 **** 2025-02-19 08:46:40.056935 | orchestrator | changed: [testbed-node-4] => { 2025-02-19 08:46:40.057505 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-19 08:46:40.058475 | orchestrator |  "ceph_osd_devices": { 2025-02-19 08:46:40.059803 | orchestrator |  "sdb": { 2025-02-19 08:46:40.059901 | orchestrator |  "osd_lvm_uuid": "118242ed-6ea1-54c4-bfaa-1565dde441bc" 2025-02-19 08:46:40.061185 | orchestrator |  }, 2025-02-19 08:46:40.061853 | orchestrator |  "sdc": { 2025-02-19 08:46:40.062346 | orchestrator |  "osd_lvm_uuid": "f77e8fc9-ceed-59c4-8328-4d335fb6ee54" 2025-02-19 08:46:40.063197 | orchestrator |  } 2025-02-19 08:46:40.063725 | orchestrator |  }, 2025-02-19 08:46:40.064395 | orchestrator |  "lvm_volumes": [ 2025-02-19 08:46:40.064833 | orchestrator |  { 2025-02-19 08:46:40.065506 | orchestrator |  "data": "osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc", 2025-02-19 08:46:40.065687 | orchestrator |  "data_vg": "ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc" 2025-02-19 08:46:40.066222 | orchestrator |  }, 2025-02-19 08:46:40.066553 | orchestrator |  { 2025-02-19 08:46:40.067851 | orchestrator |  "data": "osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54", 2025-02-19 08:46:40.068332 | orchestrator |  "data_vg": "ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54" 2025-02-19 08:46:40.069350 | orchestrator |  } 2025-02-19 08:46:40.069804 | orchestrator |  ] 2025-02-19 08:46:40.070404 | orchestrator |  } 2025-02-19 08:46:40.071168 | orchestrator | } 2025-02-19 08:46:40.072073 | orchestrator | 2025-02-19 08:46:40.073269 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-19 08:46:40.073634 | orchestrator | Wednesday 19 February 2025 08:46:40 +0000 (0:00:00.503) 0:00:34.859 **** 2025-02-19 08:46:41.556704 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-19 08:46:41.557237 | orchestrator | 2025-02-19 08:46:41.557292 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-19 08:46:41.558506 | orchestrator | 2025-02-19 08:46:41.558620 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-19 08:46:41.558673 | orchestrator | Wednesday 19 February 2025 08:46:41 +0000 (0:00:01.503) 0:00:36.363 **** 2025-02-19 08:46:41.802140 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-19 08:46:41.802357 | orchestrator | 2025-02-19 08:46:41.804989 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-19 08:46:41.805762 | orchestrator | Wednesday 19 February 2025 08:46:41 +0000 (0:00:00.245) 0:00:36.609 **** 2025-02-19 08:46:42.041464 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:46:42.041607 | orchestrator | 2025-02-19 08:46:42.041623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:42.041914 | orchestrator | Wednesday 19 February 2025 08:46:42 +0000 (0:00:00.240) 0:00:36.850 **** 2025-02-19 08:46:42.648037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-02-19 08:46:42.651832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-02-19 08:46:42.655384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-02-19 08:46:42.655634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-02-19 08:46:42.657896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-02-19 08:46:42.658967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-02-19 08:46:42.660483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-02-19 08:46:42.661108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-02-19 08:46:42.661680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-02-19 08:46:42.662253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-02-19 08:46:42.663030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-02-19 08:46:42.663636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-02-19 08:46:42.664170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-02-19 08:46:42.665089 | orchestrator | 2025-02-19 08:46:42.665529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:42.665971 | orchestrator | Wednesday 19 February 2025 08:46:42 +0000 (0:00:00.605) 0:00:37.455 **** 2025-02-19 08:46:42.891627 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:42.891868 | orchestrator | 2025-02-19 08:46:42.892349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:42.892990 | orchestrator | Wednesday 19 February 2025 08:46:42 +0000 (0:00:00.245) 0:00:37.700 **** 2025-02-19 08:46:43.103901 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:43.104272 | orchestrator | 2025-02-19 08:46:43.105273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:43.106083 | orchestrator | Wednesday 19 February 2025 08:46:43 +0000 (0:00:00.211) 0:00:37.912 **** 2025-02-19 08:46:43.320058 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:43.320444 | orchestrator | 2025-02-19 08:46:43.321014 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:43.321320 | orchestrator | Wednesday 19 February 2025 08:46:43 +0000 (0:00:00.216) 0:00:38.129 **** 2025-02-19 08:46:43.554354 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:43.555178 | orchestrator | 2025-02-19 08:46:43.557978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:43.756216 | orchestrator | Wednesday 19 February 2025 08:46:43 +0000 (0:00:00.233) 0:00:38.362 **** 2025-02-19 08:46:43.756345 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:43.757227 | orchestrator | 2025-02-19 08:46:43.759274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:43.759798 | orchestrator | Wednesday 19 February 2025 08:46:43 +0000 (0:00:00.201) 0:00:38.564 **** 2025-02-19 08:46:43.977111 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:43.977625 | orchestrator | 2025-02-19 08:46:43.978881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:43.979343 | orchestrator | Wednesday 19 February 2025 08:46:43 +0000 (0:00:00.221) 0:00:38.786 **** 2025-02-19 08:46:44.198116 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:44.391357 
| orchestrator | 2025-02-19 08:46:44.391477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:44.391516 | orchestrator | Wednesday 19 February 2025 08:46:44 +0000 (0:00:00.220) 0:00:39.006 **** 2025-02-19 08:46:44.391550 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:44.394996 | orchestrator | 2025-02-19 08:46:44.396424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:44.397796 | orchestrator | Wednesday 19 February 2025 08:46:44 +0000 (0:00:00.194) 0:00:39.201 **** 2025-02-19 08:46:45.078322 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb) 2025-02-19 08:46:45.078807 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb) 2025-02-19 08:46:45.081984 | orchestrator | 2025-02-19 08:46:46.047015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:46.047128 | orchestrator | Wednesday 19 February 2025 08:46:45 +0000 (0:00:00.684) 0:00:39.886 **** 2025-02-19 08:46:46.047160 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eb5d754e-727a-4983-9d71-2a65afff7a52) 2025-02-19 08:46:46.047942 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eb5d754e-727a-4983-9d71-2a65afff7a52) 2025-02-19 08:46:46.048593 | orchestrator | 2025-02-19 08:46:46.048972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:46.051498 | orchestrator | Wednesday 19 February 2025 08:46:46 +0000 (0:00:00.969) 0:00:40.855 **** 2025-02-19 08:46:46.512085 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_00a01370-945d-463a-a32d-5e52b5234eb4) 2025-02-19 08:46:46.512472 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_00a01370-945d-463a-a32d-5e52b5234eb4) 2025-02-19 08:46:46.512490 | orchestrator | 2025-02-19 08:46:46.512496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:46.512507 | orchestrator | Wednesday 19 February 2025 08:46:46 +0000 (0:00:00.464) 0:00:41.320 **** 2025-02-19 08:46:46.980800 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_933f95c9-b090-4d95-b9b7-90a087e62286) 2025-02-19 08:46:46.981504 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_933f95c9-b090-4d95-b9b7-90a087e62286) 2025-02-19 08:46:46.982559 | orchestrator | 2025-02-19 08:46:46.983482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:46:46.984075 | orchestrator | Wednesday 19 February 2025 08:46:46 +0000 (0:00:00.468) 0:00:41.788 **** 2025-02-19 08:46:47.382705 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-19 08:46:47.382918 | orchestrator | 2025-02-19 08:46:47.384447 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:47.840456 | orchestrator | Wednesday 19 February 2025 08:46:47 +0000 (0:00:00.402) 0:00:42.190 **** 2025-02-19 08:46:47.840624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-02-19 08:46:47.841237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-02-19 08:46:47.842936 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-02-19 08:46:47.843613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-02-19 08:46:47.844338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-02-19 08:46:47.845381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-02-19 08:46:47.846128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-02-19 08:46:47.846417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-02-19 08:46:47.847121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-02-19 08:46:47.847459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-02-19 08:46:47.848153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-02-19 08:46:47.848711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-02-19 08:46:47.849580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-02-19 08:46:47.850117 | orchestrator | 2025-02-19 08:46:47.850347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:47.850829 | orchestrator | Wednesday 19 February 2025 08:46:47 +0000 (0:00:00.457) 0:00:42.648 **** 2025-02-19 08:46:48.078376 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:48.078537 | orchestrator | 2025-02-19 08:46:48.080001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:48.080104 | orchestrator | Wednesday 19 February 2025 08:46:48 +0000 (0:00:00.239) 0:00:42.888 **** 2025-02-19 08:46:48.302279 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:48.303346 | orchestrator | 2025-02-19 08:46:48.303731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:48.304886 | orchestrator | Wednesday 19 February 2025 08:46:48 +0000 (0:00:00.223) 0:00:43.111 **** 2025-02-19 08:46:48.540118 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:48.540989 | orchestrator | 2025-02-19 08:46:48.541556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:48.542797 | orchestrator | Wednesday 19 February 2025 08:46:48 +0000 (0:00:00.237) 0:00:43.349 **** 2025-02-19 08:46:48.749687 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:48.750544 | orchestrator | 2025-02-19 08:46:48.751102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:48.751907 | orchestrator | Wednesday 19 February 2025 08:46:48 +0000 (0:00:00.209) 0:00:43.558 **** 2025-02-19 08:46:48.966882 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:48.970456 | orchestrator | 2025-02-19 08:46:49.625491 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:49.625633 | orchestrator | Wednesday 19 February 2025 08:46:48 +0000 (0:00:00.215) 0:00:43.774 **** 2025-02-19 08:46:49.625702 | orchestrator | skipping: [testbed-node-5] 2025-02-19 
08:46:49.625764 | orchestrator | 2025-02-19 08:46:49.625783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:49.626095 | orchestrator | Wednesday 19 February 2025 08:46:49 +0000 (0:00:00.659) 0:00:44.434 **** 2025-02-19 08:46:49.840008 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:49.841144 | orchestrator | 2025-02-19 08:46:49.842263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:49.843578 | orchestrator | Wednesday 19 February 2025 08:46:49 +0000 (0:00:00.213) 0:00:44.648 **** 2025-02-19 08:46:50.061926 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:50.062521 | orchestrator | 2025-02-19 08:46:50.063843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:50.064477 | orchestrator | Wednesday 19 February 2025 08:46:50 +0000 (0:00:00.222) 0:00:44.870 **** 2025-02-19 08:46:50.734976 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-02-19 08:46:50.736008 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-02-19 08:46:50.737350 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-02-19 08:46:50.739909 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-02-19 08:46:50.740249 | orchestrator | 2025-02-19 08:46:50.740288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:50.740329 | orchestrator | Wednesday 19 February 2025 08:46:50 +0000 (0:00:00.673) 0:00:45.543 **** 2025-02-19 08:46:50.949826 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:50.952075 | orchestrator | 2025-02-19 08:46:50.952797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:50.953805 | orchestrator | Wednesday 19 February 2025 08:46:50 +0000 (0:00:00.212) 0:00:45.756 **** 2025-02-19 08:46:51.146999 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:51.147424 | orchestrator | 2025-02-19 08:46:51.147695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:51.149025 | orchestrator | Wednesday 19 February 2025 08:46:51 +0000 (0:00:00.199) 0:00:45.955 **** 2025-02-19 08:46:51.403008 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:51.403360 | orchestrator | 2025-02-19 08:46:51.403401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:46:51.403715 | orchestrator | Wednesday 19 February 2025 08:46:51 +0000 (0:00:00.256) 0:00:46.212 **** 2025-02-19 08:46:51.645156 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:51.646567 | orchestrator | 2025-02-19 08:46:51.649424 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-19 08:46:51.651034 | orchestrator | Wednesday 19 February 2025 08:46:51 +0000 (0:00:00.240) 0:00:46.452 **** 2025-02-19 08:46:51.829973 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-02-19 08:46:51.830327 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-02-19 08:46:51.833161 | orchestrator | 2025-02-19 08:46:51.833706 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-19 08:46:51.834339 | orchestrator | Wednesday 19 February 2025 08:46:51 +0000 (0:00:00.182) 0:00:46.635 **** 2025-02-19 08:46:51.984376 | 
orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:51.984517 | orchestrator | 2025-02-19 08:46:51.988698 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-19 08:46:52.125978 | orchestrator | Wednesday 19 February 2025 08:46:51 +0000 (0:00:00.157) 0:00:46.793 **** 2025-02-19 08:46:52.126197 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:52.126951 | orchestrator | 2025-02-19 08:46:52.127143 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-19 08:46:52.128258 | orchestrator | Wednesday 19 February 2025 08:46:52 +0000 (0:00:00.139) 0:00:46.932 **** 2025-02-19 08:46:52.503574 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:52.506418 | orchestrator | 2025-02-19 08:46:52.713234 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-19 08:46:52.713356 | orchestrator | Wednesday 19 February 2025 08:46:52 +0000 (0:00:00.379) 0:00:47.312 **** 2025-02-19 08:46:52.713382 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:46:52.713427 | orchestrator | 2025-02-19 08:46:52.714363 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-19 08:46:52.715357 | orchestrator | Wednesday 19 February 2025 08:46:52 +0000 (0:00:00.207) 0:00:47.520 **** 2025-02-19 08:46:52.925622 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45b4b457-0c8f-5565-8330-30b761ce6399'}}) 2025-02-19 08:46:52.926121 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185b0f4c-91cb-52bd-aac1-e01f69de71f3'}}) 2025-02-19 08:46:52.927217 | orchestrator | 2025-02-19 08:46:52.929706 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-19 08:46:52.930066 | orchestrator | Wednesday 19 February 2025 08:46:52 +0000 (0:00:00.212) 0:00:47.732 **** 2025-02-19 08:46:53.095307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45b4b457-0c8f-5565-8330-30b761ce6399'}})  2025-02-19 08:46:53.095785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185b0f4c-91cb-52bd-aac1-e01f69de71f3'}})  2025-02-19 08:46:53.096987 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:53.099292 | orchestrator | 2025-02-19 08:46:53.285627 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-19 08:46:53.285823 | orchestrator | Wednesday 19 February 2025 08:46:53 +0000 (0:00:00.171) 0:00:47.904 **** 2025-02-19 08:46:53.285855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45b4b457-0c8f-5565-8330-30b761ce6399'}})  2025-02-19 08:46:53.286900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185b0f4c-91cb-52bd-aac1-e01f69de71f3'}})  2025-02-19 08:46:53.287391 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:53.287425 | orchestrator | 2025-02-19 08:46:53.288151 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-19 08:46:53.288460 | orchestrator | Wednesday 19 February 2025 08:46:53 +0000 (0:00:00.190) 0:00:48.095 **** 2025-02-19 08:46:53.487578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '45b4b457-0c8f-5565-8330-30b761ce6399'}})  2025-02-19 08:46:53.488124 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185b0f4c-91cb-52bd-aac1-e01f69de71f3'}})  2025-02-19 08:46:53.488173 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:53.488403 | orchestrator | 2025-02-19 08:46:53.489235 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-19 08:46:53.489677 | orchestrator | Wednesday 19 February 2025 08:46:53 +0000 (0:00:00.198) 0:00:48.293 **** 2025-02-19 08:46:53.653304 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:46:53.654143 | orchestrator | 2025-02-19 08:46:53.654282 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-19 08:46:53.654813 | orchestrator | Wednesday 19 February 2025 08:46:53 +0000 (0:00:00.169) 0:00:48.462 **** 2025-02-19 08:46:53.830268 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:46:53.832278 | orchestrator | 2025-02-19 08:46:53.833212 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-19 08:46:53.833255 | orchestrator | Wednesday 19 February 2025 08:46:53 +0000 (0:00:00.176) 0:00:48.639 **** 2025-02-19 08:46:53.970468 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:53.972145 | orchestrator | 2025-02-19 08:46:53.972401 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-19 08:46:53.972994 | orchestrator | Wednesday 19 February 2025 08:46:53 +0000 (0:00:00.139) 0:00:48.779 **** 2025-02-19 08:46:54.140247 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:54.140449 | orchestrator | 2025-02-19 08:46:54.140485 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-19 08:46:54.140945 | orchestrator | Wednesday 19 February 2025 08:46:54 +0000 (0:00:00.170) 0:00:48.950 **** 2025-02-19 08:46:54.289731 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:54.290228 | orchestrator | 2025-02-19 08:46:54.290730 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-19 08:46:54.291700 | orchestrator | Wednesday 19 February 2025 08:46:54 +0000 (0:00:00.148) 0:00:49.099 **** 2025-02-19 08:46:54.428929 | orchestrator | ok: [testbed-node-5] => { 2025-02-19 08:46:54.430562 | orchestrator |  "ceph_osd_devices": { 2025-02-19 08:46:54.433943 | orchestrator |  "sdb": { 2025-02-19 08:46:54.434795 | orchestrator |  "osd_lvm_uuid": "45b4b457-0c8f-5565-8330-30b761ce6399" 2025-02-19 08:46:54.434830 | orchestrator |  }, 2025-02-19 08:46:54.434853 | orchestrator |  "sdc": { 2025-02-19 08:46:54.436065 | orchestrator |  "osd_lvm_uuid": "185b0f4c-91cb-52bd-aac1-e01f69de71f3" 2025-02-19 08:46:54.436685 | orchestrator |  } 2025-02-19 08:46:54.437672 | orchestrator |  } 2025-02-19 08:46:54.437894 | orchestrator | } 2025-02-19 08:46:54.437977 | orchestrator | 2025-02-19 08:46:54.438823 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-19 08:46:54.438951 | orchestrator | Wednesday 19 February 2025 08:46:54 +0000 (0:00:00.137) 0:00:49.236 **** 2025-02-19 08:46:54.825627 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:54.825888 | orchestrator | 2025-02-19 08:46:54.827541 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-19 08:46:54.827948 | orchestrator | Wednesday 19 February 2025 08:46:54 +0000 (0:00:00.397) 0:00:49.634 **** 2025-02-19 
08:46:54.977505 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:54.979208 | orchestrator | 2025-02-19 08:46:54.980273 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-19 08:46:54.981240 | orchestrator | Wednesday 19 February 2025 08:46:54 +0000 (0:00:00.150) 0:00:49.785 **** 2025-02-19 08:46:55.125003 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:46:55.125313 | orchestrator | 2025-02-19 08:46:55.126909 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-19 08:46:55.128082 | orchestrator | Wednesday 19 February 2025 08:46:55 +0000 (0:00:00.148) 0:00:49.934 **** 2025-02-19 08:46:55.419800 | orchestrator | changed: [testbed-node-5] => { 2025-02-19 08:46:55.420203 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-19 08:46:55.420841 | orchestrator |  "ceph_osd_devices": { 2025-02-19 08:46:55.421475 | orchestrator |  "sdb": { 2025-02-19 08:46:55.423719 | orchestrator |  "osd_lvm_uuid": "45b4b457-0c8f-5565-8330-30b761ce6399" 2025-02-19 08:46:55.425422 | orchestrator |  }, 2025-02-19 08:46:55.426396 | orchestrator |  "sdc": { 2025-02-19 08:46:55.427266 | orchestrator |  "osd_lvm_uuid": "185b0f4c-91cb-52bd-aac1-e01f69de71f3" 2025-02-19 08:46:55.427927 | orchestrator |  } 2025-02-19 08:46:55.428125 | orchestrator |  }, 2025-02-19 08:46:55.429066 | orchestrator |  "lvm_volumes": [ 2025-02-19 08:46:55.429226 | orchestrator |  { 2025-02-19 08:46:55.430261 | orchestrator |  "data": "osd-block-45b4b457-0c8f-5565-8330-30b761ce6399", 2025-02-19 08:46:55.430738 | orchestrator |  "data_vg": "ceph-45b4b457-0c8f-5565-8330-30b761ce6399" 2025-02-19 08:46:55.431019 | orchestrator |  }, 2025-02-19 08:46:55.431830 | orchestrator |  { 2025-02-19 08:46:55.432060 | orchestrator |  "data": "osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3", 2025-02-19 08:46:55.432505 | orchestrator |  "data_vg": "ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3" 2025-02-19 08:46:55.432899 | orchestrator |  } 2025-02-19 08:46:55.433291 | orchestrator |  ] 2025-02-19 08:46:55.433752 | orchestrator |  } 2025-02-19 08:46:55.434177 | orchestrator | } 2025-02-19 08:46:55.434625 | orchestrator | 2025-02-19 08:46:55.435022 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-19 08:46:55.435376 | orchestrator | Wednesday 19 February 2025 08:46:55 +0000 (0:00:00.295) 0:00:50.229 **** 2025-02-19 08:46:56.567106 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-19 08:46:56.567882 | orchestrator | 2025-02-19 08:46:56.568927 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:46:56.568983 | orchestrator | 2025-02-19 08:46:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:46:56.570006 | orchestrator | 2025-02-19 08:46:56 | INFO  | Please wait and do not abort execution. 
2025-02-19 08:46:56.570093 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-19 08:46:56.570407 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-19 08:46:56.571415 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-19 08:46:56.572572 | orchestrator | 2025-02-19 08:46:56.572819 | orchestrator | 2025-02-19 08:46:56.574012 | orchestrator | 2025-02-19 08:46:56.574973 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:46:56.575007 | orchestrator | Wednesday 19 February 2025 08:46:56 +0000 (0:00:01.142) 0:00:51.372 **** 2025-02-19 08:46:56.575819 | orchestrator | =============================================================================== 2025-02-19 08:46:56.576662 | orchestrator | Write configuration file ------------------------------------------------ 5.54s 2025-02-19 08:46:56.576945 | orchestrator | Add known links to the list of available block devices ------------------ 1.91s 2025-02-19 08:46:56.577964 | orchestrator | Add known partitions to the list of available block devices ------------- 1.84s 2025-02-19 08:46:56.578906 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2025-02-19 08:46:56.578944 | orchestrator | Print configuration data ------------------------------------------------ 1.11s 2025-02-19 08:46:56.579151 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2025-02-19 08:46:56.579861 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.91s 2025-02-19 08:46:56.580217 | orchestrator | Get initial list of available block devices ----------------------------- 0.85s 2025-02-19 08:46:56.581026 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s 2025-02-19 08:46:56.581634 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2025-02-19 08:46:56.581924 | orchestrator | Print WAL devices ------------------------------------------------------- 0.79s 2025-02-19 08:46:56.582441 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.76s 2025-02-19 08:46:56.582830 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-02-19 08:46:56.583366 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-02-19 08:46:56.584065 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2025-02-19 08:46:56.584314 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-02-19 08:46:56.584872 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.72s 2025-02-19 08:46:56.585331 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.69s 2025-02-19 08:46:56.585832 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.69s 2025-02-19 08:46:56.586133 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-02-19 08:46:58.854836 | orchestrator | 2025-02-19 08:46:58 | INFO  | Task 61990e1c-142e-4bf3-8732-ac6040bb6e49 is running in background. Output coming soon. 
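For readers skimming the recap above: each of the three "Ceph configure LVM" plays takes the per-node `ceph_osd_devices` map (one entry per OSD disk carrying an `osd_lvm_uuid`) and derives the `lvm_volumes` list that the "Write configuration file" handler then writes back on testbed-manager (visible as `changed: [testbed-node-X -> testbed-manager(192.168.16.5)]`). The mapping is fully visible in the "Print configuration data" output: every UUID becomes a logical volume named `osd-block-<uuid>` inside a volume group named `ceph-<uuid>`. The printed UUIDs all carry the version-5 nibble, which suggests name-based generation so that reruns keep the same VG/LV names, though the namespace and seed are not shown in the log. A minimal Python sketch of the transformation, using only what the output above shows (the real logic lives in the tasks "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes", which are not reproduced here):

```python
# Sketch only: reproduces the mapping visible in the "Print configuration data"
# output above for testbed-node-3, not the actual Ansible implementation.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "3ffe4904-1899-5051-bec6-9b9e5f20cdb9"},
    "sdc": {"osd_lvm_uuid": "bbf6aa6c-a724-5ce6-b507-3cef42d33bac"},
}

# Block-only layout (the block+db, block+wal and block+db+wal variants were all
# skipped in this run): one data LV per OSD disk, named after its UUID.
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]

print(lvm_volumes)
# [{'data': 'osd-block-3ffe4904-...', 'data_vg': 'ceph-3ffe4904-...'},
#  {'data': 'osd-block-bbf6aa6c-...', 'data_vg': 'ceph-bbf6aa6c-...'}]
```

The same structure is produced for testbed-node-4 and testbed-node-5 with their own UUIDs, which is why all three hosts finish the play with identical ok/changed/skipped counts.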
2025-02-19 08:47:50.762126 | orchestrator | 2025-02-19 08:47:41 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-02-19 08:47:52.466966 | orchestrator | 2025-02-19 08:47:41 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-02-19 08:47:52.467088 | orchestrator | 2025-02-19 08:47:41 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-02-19 08:47:52.467109 | orchestrator | 2025-02-19 08:47:41 | INFO  | Handling group overwrites in 99-overwrite
2025-02-19 08:47:52.467139 | orchestrator | 2025-02-19 08:47:41 | INFO  | Removing group ceph-mds from 50-ceph
2025-02-19 08:47:52.467169 | orchestrator | 2025-02-19 08:47:41 | INFO  | Removing group ceph-rgw from 50-ceph
2025-02-19 08:47:52.467184 | orchestrator | 2025-02-19 08:47:41 | INFO  | Removing group netbird:children from 50-infrastruture
2025-02-19 08:47:52.467198 | orchestrator | 2025-02-19 08:47:41 | INFO  | Removing group storage:children from 50-kolla
2025-02-19 08:47:52.467213 | orchestrator | 2025-02-19 08:47:42 | INFO  | Removing group frr:children from 60-generic
2025-02-19 08:47:52.467227 | orchestrator | 2025-02-19 08:47:42 | INFO  | Handling group overwrites in 20-roles
2025-02-19 08:47:52.467242 | orchestrator | 2025-02-19 08:47:42 | INFO  | Removing group k3s_node from 50-infrastruture
2025-02-19 08:47:52.467283 | orchestrator | 2025-02-19 08:47:42 | INFO  | File 20-netbox not found in /inventory.pre/
2025-02-19 08:47:52.467298 | orchestrator | 2025-02-19 08:47:50 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-02-19 08:47:52.467331 | orchestrator | 2025-02-19 08:47:52 | INFO  | Task 4bb125ae-acd6-4c9d-b311-847012562ba7 (ceph-create-lvm-devices) was prepared for execution.
2025-02-19 08:47:55.658624 | orchestrator | 2025-02-19 08:47:52 | INFO  | It takes a moment until task 4bb125ae-acd6-4c9d-b311-847012562ba7 (ceph-create-lvm-devices) has been started and output is visible here.
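The ceph-create-lvm-devices task queued above produces the output that follows: for every configured OSD disk it first builds the list of usable block devices (raw devices, /dev/disk/by-id links, and partitions), then creates a volume group named ceph-<osd_lvm_uuid> on the disk and a logical volume osd-block-<osd_lvm_uuid> inside it (see the "Create block VGs" and "Create block LVs" tasks below), and finally cross-checks the resulting LVs and PVs against lvm_volumes. A hypothetical equivalent for a single disk, written with the community.general LVM modules (whether OSISM uses these modules or raw commands is not visible in the log), could look like this; the UUID and device path are taken from the testbed-node-3 output only for illustration.

---
# Minimal sketch of the VG/LV layout created per OSD disk.
# Not the OSISM implementation; UUID and device are examples from the log.
- name: Create block VG and LV for a single Ceph OSD
  hosts: testbed-node-3
  become: true
  vars:
    osd_lvm_uuid: 3ffe4904-1899-5051-bec6-9b9e5f20cdb9
    osd_device: /dev/sdb
  tasks:
    - name: Create block VG on the raw device
      community.general.lvg:
        vg: "ceph-{{ osd_lvm_uuid }}"
        pvs: "{{ osd_device }}"

    - name: Create block LV that fills the VG
      community.general.lvol:
        vg: "ceph-{{ osd_lvm_uuid }}"
        lv: "osd-block-{{ osd_lvm_uuid }}"
        size: 100%VG
        shrink: false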
2025-02-19 08:47:55.658820 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-19 08:47:56.152505 | orchestrator | 2025-02-19 08:47:56.152637 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-19 08:47:56.153774 | orchestrator | 2025-02-19 08:47:56.155835 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-19 08:47:56.394608 | orchestrator | Wednesday 19 February 2025 08:47:56 +0000 (0:00:00.423) 0:00:00.423 **** 2025-02-19 08:47:56.394772 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-19 08:47:56.394898 | orchestrator | 2025-02-19 08:47:56.395855 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-19 08:47:56.396781 | orchestrator | Wednesday 19 February 2025 08:47:56 +0000 (0:00:00.243) 0:00:00.667 **** 2025-02-19 08:47:56.626187 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:47:56.626359 | orchestrator | 2025-02-19 08:47:56.626391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:56.627352 | orchestrator | Wednesday 19 February 2025 08:47:56 +0000 (0:00:00.231) 0:00:00.899 **** 2025-02-19 08:47:57.385719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-02-19 08:47:57.385924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-02-19 08:47:57.386763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-02-19 08:47:57.388573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-02-19 08:47:57.389173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-02-19 08:47:57.390298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-02-19 08:47:57.391451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-02-19 08:47:57.392290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-02-19 08:47:57.392815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-02-19 08:47:57.393237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-02-19 08:47:57.393945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-02-19 08:47:57.394776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-02-19 08:47:57.394871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-02-19 08:47:57.395306 | orchestrator | 2025-02-19 08:47:57.395703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:57.395949 | orchestrator | Wednesday 19 February 2025 08:47:57 +0000 (0:00:00.757) 0:00:01.656 **** 2025-02-19 08:47:57.595518 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:47:57.595793 | orchestrator | 2025-02-19 08:47:57.595834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:57.596857 | orchestrator | Wednesday 19 February 2025 08:47:57 +0000 
(0:00:00.212) 0:00:01.868 **** 2025-02-19 08:47:57.795467 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:47:57.795744 | orchestrator | 2025-02-19 08:47:57.797000 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:57.993835 | orchestrator | Wednesday 19 February 2025 08:47:57 +0000 (0:00:00.199) 0:00:02.068 **** 2025-02-19 08:47:57.993980 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:47:57.994467 | orchestrator | 2025-02-19 08:47:57.994508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:57.995342 | orchestrator | Wednesday 19 February 2025 08:47:57 +0000 (0:00:00.199) 0:00:02.267 **** 2025-02-19 08:47:58.214777 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:47:58.215382 | orchestrator | 2025-02-19 08:47:58.215735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:58.217984 | orchestrator | Wednesday 19 February 2025 08:47:58 +0000 (0:00:00.219) 0:00:02.487 **** 2025-02-19 08:47:58.423130 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:47:58.423521 | orchestrator | 2025-02-19 08:47:58.424245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:58.425199 | orchestrator | Wednesday 19 February 2025 08:47:58 +0000 (0:00:00.209) 0:00:02.696 **** 2025-02-19 08:47:58.623304 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:47:58.623458 | orchestrator | 2025-02-19 08:47:58.623476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:58.624582 | orchestrator | Wednesday 19 February 2025 08:47:58 +0000 (0:00:00.200) 0:00:02.897 **** 2025-02-19 08:47:58.829060 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:47:58.830758 | orchestrator | 2025-02-19 08:47:58.831550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:58.831709 | orchestrator | Wednesday 19 February 2025 08:47:58 +0000 (0:00:00.205) 0:00:03.103 **** 2025-02-19 08:47:59.046850 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:47:59.046988 | orchestrator | 2025-02-19 08:47:59.047728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:59.048090 | orchestrator | Wednesday 19 February 2025 08:47:59 +0000 (0:00:00.217) 0:00:03.320 **** 2025-02-19 08:47:59.641773 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283) 2025-02-19 08:47:59.642144 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283) 2025-02-19 08:47:59.643287 | orchestrator | 2025-02-19 08:47:59.643335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:47:59.643764 | orchestrator | Wednesday 19 February 2025 08:47:59 +0000 (0:00:00.594) 0:00:03.915 **** 2025-02-19 08:48:00.289053 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0f115ae7-332f-47b5-bfba-4efd1297123a) 2025-02-19 08:48:00.290362 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0f115ae7-332f-47b5-bfba-4efd1297123a) 2025-02-19 08:48:00.292212 | orchestrator | 2025-02-19 08:48:00.292424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 
08:48:00.293886 | orchestrator | Wednesday 19 February 2025 08:48:00 +0000 (0:00:00.645) 0:00:04.560 **** 2025-02-19 08:48:00.781877 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7ac42676-4a1f-422d-9e47-87a492d5a795) 2025-02-19 08:48:00.782783 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7ac42676-4a1f-422d-9e47-87a492d5a795) 2025-02-19 08:48:00.784895 | orchestrator | 2025-02-19 08:48:00.785899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:00.786925 | orchestrator | Wednesday 19 February 2025 08:48:00 +0000 (0:00:00.492) 0:00:05.053 **** 2025-02-19 08:48:01.217988 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b50482d4-467d-4151-94c3-bb810c8ecc19) 2025-02-19 08:48:01.218879 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b50482d4-467d-4151-94c3-bb810c8ecc19) 2025-02-19 08:48:01.218913 | orchestrator | 2025-02-19 08:48:01.218926 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:01.219090 | orchestrator | Wednesday 19 February 2025 08:48:01 +0000 (0:00:00.439) 0:00:05.492 **** 2025-02-19 08:48:01.565357 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-19 08:48:01.565791 | orchestrator | 2025-02-19 08:48:01.565962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:01.566613 | orchestrator | Wednesday 19 February 2025 08:48:01 +0000 (0:00:00.344) 0:00:05.837 **** 2025-02-19 08:48:02.053237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-02-19 08:48:02.056014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-02-19 08:48:02.057042 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-02-19 08:48:02.057103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-02-19 08:48:02.058084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-02-19 08:48:02.058820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-02-19 08:48:02.059369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-02-19 08:48:02.060161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-02-19 08:48:02.060562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-02-19 08:48:02.061110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-02-19 08:48:02.062787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-02-19 08:48:02.063253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-02-19 08:48:02.063800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-02-19 08:48:02.064337 | orchestrator | 2025-02-19 08:48:02.064788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:02.065174 | orchestrator | Wednesday 19 February 2025 08:48:02 
+0000 (0:00:00.488) 0:00:06.325 **** 2025-02-19 08:48:02.270626 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:02.271078 | orchestrator | 2025-02-19 08:48:02.271910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:02.273423 | orchestrator | Wednesday 19 February 2025 08:48:02 +0000 (0:00:00.218) 0:00:06.543 **** 2025-02-19 08:48:02.465389 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:02.465933 | orchestrator | 2025-02-19 08:48:02.467425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:02.469399 | orchestrator | Wednesday 19 February 2025 08:48:02 +0000 (0:00:00.195) 0:00:06.738 **** 2025-02-19 08:48:02.674202 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:02.675732 | orchestrator | 2025-02-19 08:48:02.676025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:02.676926 | orchestrator | Wednesday 19 February 2025 08:48:02 +0000 (0:00:00.208) 0:00:06.947 **** 2025-02-19 08:48:02.876524 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:02.880422 | orchestrator | 2025-02-19 08:48:02.881432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:02.881497 | orchestrator | Wednesday 19 February 2025 08:48:02 +0000 (0:00:00.202) 0:00:07.149 **** 2025-02-19 08:48:03.461746 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:03.462817 | orchestrator | 2025-02-19 08:48:03.464059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:03.465584 | orchestrator | Wednesday 19 February 2025 08:48:03 +0000 (0:00:00.586) 0:00:07.735 **** 2025-02-19 08:48:03.665228 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:03.665689 | orchestrator | 2025-02-19 08:48:03.667000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:03.668177 | orchestrator | Wednesday 19 February 2025 08:48:03 +0000 (0:00:00.202) 0:00:07.938 **** 2025-02-19 08:48:03.867278 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:03.867568 | orchestrator | 2025-02-19 08:48:03.868061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:03.868114 | orchestrator | Wednesday 19 February 2025 08:48:03 +0000 (0:00:00.200) 0:00:08.139 **** 2025-02-19 08:48:04.088761 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:04.089914 | orchestrator | 2025-02-19 08:48:04.090456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:04.091891 | orchestrator | Wednesday 19 February 2025 08:48:04 +0000 (0:00:00.221) 0:00:08.360 **** 2025-02-19 08:48:04.751229 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-02-19 08:48:04.752421 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-02-19 08:48:04.753735 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-02-19 08:48:04.755361 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-02-19 08:48:04.756773 | orchestrator | 2025-02-19 08:48:04.760011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:04.761259 | orchestrator | Wednesday 19 February 2025 08:48:04 +0000 (0:00:00.664) 0:00:09.025 **** 2025-02-19 08:48:04.955794 | orchestrator | 
skipping: [testbed-node-3] 2025-02-19 08:48:04.956724 | orchestrator | 2025-02-19 08:48:04.957576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:04.958426 | orchestrator | Wednesday 19 February 2025 08:48:04 +0000 (0:00:00.203) 0:00:09.228 **** 2025-02-19 08:48:05.185467 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:05.185718 | orchestrator | 2025-02-19 08:48:05.185763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:05.186678 | orchestrator | Wednesday 19 February 2025 08:48:05 +0000 (0:00:00.224) 0:00:09.453 **** 2025-02-19 08:48:05.384984 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:05.385902 | orchestrator | 2025-02-19 08:48:05.388819 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:05.601607 | orchestrator | Wednesday 19 February 2025 08:48:05 +0000 (0:00:00.204) 0:00:09.658 **** 2025-02-19 08:48:05.601764 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:05.745770 | orchestrator | 2025-02-19 08:48:05.745884 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-19 08:48:05.745902 | orchestrator | Wednesday 19 February 2025 08:48:05 +0000 (0:00:00.215) 0:00:09.873 **** 2025-02-19 08:48:05.745933 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:05.746411 | orchestrator | 2025-02-19 08:48:05.746918 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-19 08:48:05.747996 | orchestrator | Wednesday 19 February 2025 08:48:05 +0000 (0:00:00.141) 0:00:10.015 **** 2025-02-19 08:48:05.951726 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}}) 2025-02-19 08:48:05.952523 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}}) 2025-02-19 08:48:05.953377 | orchestrator | 2025-02-19 08:48:05.955054 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-19 08:48:05.955563 | orchestrator | Wednesday 19 February 2025 08:48:05 +0000 (0:00:00.205) 0:00:10.220 **** 2025-02-19 08:48:08.221377 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}) 2025-02-19 08:48:08.221622 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}) 2025-02-19 08:48:08.221724 | orchestrator | 2025-02-19 08:48:08.224305 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-19 08:48:08.225000 | orchestrator | Wednesday 19 February 2025 08:48:08 +0000 (0:00:02.271) 0:00:12.492 **** 2025-02-19 08:48:08.397045 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:08.398520 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:08.398563 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:08.401163 | orchestrator | 2025-02-19 08:48:08.402133 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-19 08:48:08.402172 | orchestrator | Wednesday 19 February 2025 08:48:08 +0000 (0:00:00.176) 0:00:12.668 **** 2025-02-19 08:48:09.890923 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}) 2025-02-19 08:48:09.892883 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}) 2025-02-19 08:48:09.893712 | orchestrator | 2025-02-19 08:48:09.894576 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-19 08:48:09.895296 | orchestrator | Wednesday 19 February 2025 08:48:09 +0000 (0:00:01.493) 0:00:14.162 **** 2025-02-19 08:48:10.079759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:10.080313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:10.081006 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:10.081768 | orchestrator | 2025-02-19 08:48:10.082216 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-19 08:48:10.082879 | orchestrator | Wednesday 19 February 2025 08:48:10 +0000 (0:00:00.191) 0:00:14.353 **** 2025-02-19 08:48:10.226691 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:10.228349 | orchestrator | 2025-02-19 08:48:10.230901 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-02-19 08:48:10.231447 | orchestrator | Wednesday 19 February 2025 08:48:10 +0000 (0:00:00.145) 0:00:14.499 **** 2025-02-19 08:48:10.401775 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:10.401979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:10.402570 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:10.403781 | orchestrator | 2025-02-19 08:48:10.404230 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-19 08:48:10.405374 | orchestrator | Wednesday 19 February 2025 08:48:10 +0000 (0:00:00.174) 0:00:14.674 **** 2025-02-19 08:48:10.547785 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:10.548063 | orchestrator | 2025-02-19 08:48:10.548538 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-19 08:48:10.549915 | orchestrator | Wednesday 19 February 2025 08:48:10 +0000 (0:00:00.147) 0:00:14.821 **** 2025-02-19 08:48:10.717550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:10.718484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:10.718531 | orchestrator | skipping: 
[testbed-node-3] 2025-02-19 08:48:10.719723 | orchestrator | 2025-02-19 08:48:10.720317 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-19 08:48:10.721485 | orchestrator | Wednesday 19 February 2025 08:48:10 +0000 (0:00:00.169) 0:00:14.990 **** 2025-02-19 08:48:10.869038 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:10.869330 | orchestrator | 2025-02-19 08:48:10.869734 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-19 08:48:10.870460 | orchestrator | Wednesday 19 February 2025 08:48:10 +0000 (0:00:00.152) 0:00:15.142 **** 2025-02-19 08:48:11.202816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:11.203229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:11.204808 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:11.204882 | orchestrator | 2025-02-19 08:48:11.204952 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-19 08:48:11.205746 | orchestrator | Wednesday 19 February 2025 08:48:11 +0000 (0:00:00.332) 0:00:15.475 **** 2025-02-19 08:48:11.337077 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:48:11.338475 | orchestrator | 2025-02-19 08:48:11.339540 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-19 08:48:11.340005 | orchestrator | Wednesday 19 February 2025 08:48:11 +0000 (0:00:00.135) 0:00:15.610 **** 2025-02-19 08:48:11.514538 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:11.515511 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:11.516854 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:11.517248 | orchestrator | 2025-02-19 08:48:11.518665 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-19 08:48:11.691845 | orchestrator | Wednesday 19 February 2025 08:48:11 +0000 (0:00:00.176) 0:00:15.787 **** 2025-02-19 08:48:11.692055 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:11.692165 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:11.693711 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:11.694257 | orchestrator | 2025-02-19 08:48:11.695465 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-19 08:48:11.696526 | orchestrator | Wednesday 19 February 2025 08:48:11 +0000 (0:00:00.177) 0:00:15.964 **** 2025-02-19 08:48:11.872499 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:11.872724 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:11.873837 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:11.873967 | orchestrator | 2025-02-19 08:48:11.876472 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-19 08:48:11.876723 | orchestrator | Wednesday 19 February 2025 08:48:11 +0000 (0:00:00.181) 0:00:16.145 **** 2025-02-19 08:48:12.015806 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:12.016163 | orchestrator | 2025-02-19 08:48:12.016207 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-19 08:48:12.016292 | orchestrator | Wednesday 19 February 2025 08:48:12 +0000 (0:00:00.142) 0:00:16.288 **** 2025-02-19 08:48:12.163530 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:12.163779 | orchestrator | 2025-02-19 08:48:12.163806 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-19 08:48:12.163827 | orchestrator | Wednesday 19 February 2025 08:48:12 +0000 (0:00:00.146) 0:00:16.434 **** 2025-02-19 08:48:12.297755 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:12.297882 | orchestrator | 2025-02-19 08:48:12.297895 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-19 08:48:12.297906 | orchestrator | Wednesday 19 February 2025 08:48:12 +0000 (0:00:00.137) 0:00:16.571 **** 2025-02-19 08:48:12.453338 | orchestrator | ok: [testbed-node-3] => { 2025-02-19 08:48:12.453867 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-19 08:48:12.454499 | orchestrator | } 2025-02-19 08:48:12.455636 | orchestrator | 2025-02-19 08:48:12.456603 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-19 08:48:12.457127 | orchestrator | Wednesday 19 February 2025 08:48:12 +0000 (0:00:00.152) 0:00:16.724 **** 2025-02-19 08:48:12.603083 | orchestrator | ok: [testbed-node-3] => { 2025-02-19 08:48:12.603387 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-19 08:48:12.604433 | orchestrator | } 2025-02-19 08:48:12.604904 | orchestrator | 2025-02-19 08:48:12.605694 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-19 08:48:12.606267 | orchestrator | Wednesday 19 February 2025 08:48:12 +0000 (0:00:00.151) 0:00:16.876 **** 2025-02-19 08:48:12.746542 | orchestrator | ok: [testbed-node-3] => { 2025-02-19 08:48:12.747229 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-19 08:48:12.747897 | orchestrator | } 2025-02-19 08:48:12.748916 | orchestrator | 2025-02-19 08:48:12.749226 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-19 08:48:12.750132 | orchestrator | Wednesday 19 February 2025 08:48:12 +0000 (0:00:00.143) 0:00:17.019 **** 2025-02-19 08:48:13.475776 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:48:13.476487 | orchestrator | 2025-02-19 08:48:13.477300 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-19 08:48:13.478523 | orchestrator | Wednesday 19 February 2025 08:48:13 +0000 (0:00:00.727) 0:00:17.747 **** 2025-02-19 08:48:14.110576 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:48:14.111269 | orchestrator | 2025-02-19 08:48:14.112228 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-02-19 08:48:14.113168 | orchestrator | Wednesday 19 February 2025 08:48:14 +0000 (0:00:00.636) 0:00:18.384 **** 2025-02-19 08:48:14.658419 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:48:14.661955 | orchestrator | 2025-02-19 08:48:14.662771 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-19 08:48:14.662828 | orchestrator | Wednesday 19 February 2025 08:48:14 +0000 (0:00:00.540) 0:00:18.924 **** 2025-02-19 08:48:14.799138 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:48:14.799885 | orchestrator | 2025-02-19 08:48:14.800581 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-19 08:48:14.801150 | orchestrator | Wednesday 19 February 2025 08:48:14 +0000 (0:00:00.148) 0:00:19.072 **** 2025-02-19 08:48:14.937566 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:14.938586 | orchestrator | 2025-02-19 08:48:14.939495 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-19 08:48:14.940520 | orchestrator | Wednesday 19 February 2025 08:48:14 +0000 (0:00:00.138) 0:00:19.211 **** 2025-02-19 08:48:15.041003 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:15.041222 | orchestrator | 2025-02-19 08:48:15.041970 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-19 08:48:15.042773 | orchestrator | Wednesday 19 February 2025 08:48:15 +0000 (0:00:00.103) 0:00:19.315 **** 2025-02-19 08:48:15.184991 | orchestrator | ok: [testbed-node-3] => { 2025-02-19 08:48:15.186107 | orchestrator |  "vgs_report": { 2025-02-19 08:48:15.187272 | orchestrator |  "vg": [] 2025-02-19 08:48:15.187923 | orchestrator |  } 2025-02-19 08:48:15.188780 | orchestrator | } 2025-02-19 08:48:15.189586 | orchestrator | 2025-02-19 08:48:15.190581 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-19 08:48:15.190861 | orchestrator | Wednesday 19 February 2025 08:48:15 +0000 (0:00:00.143) 0:00:19.458 **** 2025-02-19 08:48:15.325791 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:15.327196 | orchestrator | 2025-02-19 08:48:15.329157 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-19 08:48:15.486620 | orchestrator | Wednesday 19 February 2025 08:48:15 +0000 (0:00:00.140) 0:00:19.598 **** 2025-02-19 08:48:15.486771 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:15.486919 | orchestrator | 2025-02-19 08:48:15.488057 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-19 08:48:15.489010 | orchestrator | Wednesday 19 February 2025 08:48:15 +0000 (0:00:00.159) 0:00:19.757 **** 2025-02-19 08:48:15.619855 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:15.620381 | orchestrator | 2025-02-19 08:48:15.620441 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-19 08:48:15.620523 | orchestrator | Wednesday 19 February 2025 08:48:15 +0000 (0:00:00.136) 0:00:19.893 **** 2025-02-19 08:48:15.767781 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:15.767991 | orchestrator | 2025-02-19 08:48:15.768718 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-19 08:48:15.769383 | orchestrator | Wednesday 19 February 2025 08:48:15 +0000 (0:00:00.147) 0:00:20.041 **** 2025-02-19 
08:48:15.928803 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:15.928998 | orchestrator | 2025-02-19 08:48:15.929605 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-19 08:48:15.929688 | orchestrator | Wednesday 19 February 2025 08:48:15 +0000 (0:00:00.159) 0:00:20.200 **** 2025-02-19 08:48:16.248012 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:16.248515 | orchestrator | 2025-02-19 08:48:16.249972 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-19 08:48:16.251352 | orchestrator | Wednesday 19 February 2025 08:48:16 +0000 (0:00:00.320) 0:00:20.521 **** 2025-02-19 08:48:16.422175 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:16.423270 | orchestrator | 2025-02-19 08:48:16.423316 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-19 08:48:16.423913 | orchestrator | Wednesday 19 February 2025 08:48:16 +0000 (0:00:00.173) 0:00:20.695 **** 2025-02-19 08:48:16.573026 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:16.573810 | orchestrator | 2025-02-19 08:48:16.575108 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-19 08:48:16.575720 | orchestrator | Wednesday 19 February 2025 08:48:16 +0000 (0:00:00.151) 0:00:20.847 **** 2025-02-19 08:48:16.716501 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:16.717334 | orchestrator | 2025-02-19 08:48:16.718501 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-19 08:48:16.718832 | orchestrator | Wednesday 19 February 2025 08:48:16 +0000 (0:00:00.142) 0:00:20.989 **** 2025-02-19 08:48:16.845076 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:16.848015 | orchestrator | 2025-02-19 08:48:16.848484 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-19 08:48:16.849338 | orchestrator | Wednesday 19 February 2025 08:48:16 +0000 (0:00:00.128) 0:00:21.118 **** 2025-02-19 08:48:16.994734 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:16.994935 | orchestrator | 2025-02-19 08:48:16.995585 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-19 08:48:16.996511 | orchestrator | Wednesday 19 February 2025 08:48:16 +0000 (0:00:00.150) 0:00:21.269 **** 2025-02-19 08:48:17.170248 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:17.171003 | orchestrator | 2025-02-19 08:48:17.171376 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-02-19 08:48:17.172396 | orchestrator | Wednesday 19 February 2025 08:48:17 +0000 (0:00:00.175) 0:00:21.444 **** 2025-02-19 08:48:17.315805 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:17.316839 | orchestrator | 2025-02-19 08:48:17.318711 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-19 08:48:17.319291 | orchestrator | Wednesday 19 February 2025 08:48:17 +0000 (0:00:00.145) 0:00:21.589 **** 2025-02-19 08:48:17.459878 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:17.460350 | orchestrator | 2025-02-19 08:48:17.463401 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-19 08:48:17.631800 | orchestrator | Wednesday 19 February 2025 08:48:17 +0000 (0:00:00.142) 0:00:21.731 
**** 2025-02-19 08:48:17.631941 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:17.632044 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:17.633038 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:17.634134 | orchestrator | 2025-02-19 08:48:17.634337 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-19 08:48:17.634944 | orchestrator | Wednesday 19 February 2025 08:48:17 +0000 (0:00:00.173) 0:00:21.905 **** 2025-02-19 08:48:17.788835 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:17.794621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:17.794768 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:17.795017 | orchestrator | 2025-02-19 08:48:17.795063 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-19 08:48:17.795979 | orchestrator | Wednesday 19 February 2025 08:48:17 +0000 (0:00:00.157) 0:00:22.062 **** 2025-02-19 08:48:17.966498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:17.967896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:17.969211 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:17.970569 | orchestrator | 2025-02-19 08:48:17.971969 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-19 08:48:17.972664 | orchestrator | Wednesday 19 February 2025 08:48:17 +0000 (0:00:00.176) 0:00:22.239 **** 2025-02-19 08:48:18.351055 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:18.351621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:18.351897 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:18.352772 | orchestrator | 2025-02-19 08:48:18.354146 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-19 08:48:18.354528 | orchestrator | Wednesday 19 February 2025 08:48:18 +0000 (0:00:00.385) 0:00:22.624 **** 2025-02-19 08:48:18.524211 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:18.524495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:18.525494 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:18.525960 | orchestrator | 2025-02-19 08:48:18.527866 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-19 08:48:18.685195 | orchestrator | Wednesday 19 February 2025 08:48:18 +0000 (0:00:00.172) 0:00:22.797 **** 2025-02-19 08:48:18.685411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:18.686447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:18.687189 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:18.688354 | orchestrator | 2025-02-19 08:48:18.689271 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-19 08:48:18.690124 | orchestrator | Wednesday 19 February 2025 08:48:18 +0000 (0:00:00.161) 0:00:22.958 **** 2025-02-19 08:48:18.847934 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:18.848280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:18.849388 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:18.850327 | orchestrator | 2025-02-19 08:48:18.851000 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-19 08:48:18.851598 | orchestrator | Wednesday 19 February 2025 08:48:18 +0000 (0:00:00.163) 0:00:23.121 **** 2025-02-19 08:48:19.015129 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:19.015592 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:19.016932 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:19.017613 | orchestrator | 2025-02-19 08:48:19.018947 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-19 08:48:19.019786 | orchestrator | Wednesday 19 February 2025 08:48:19 +0000 (0:00:00.166) 0:00:23.288 **** 2025-02-19 08:48:19.547698 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:48:19.547867 | orchestrator | 2025-02-19 08:48:19.548802 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-02-19 08:48:19.548856 | orchestrator | Wednesday 19 February 2025 08:48:19 +0000 (0:00:00.531) 0:00:23.820 **** 2025-02-19 08:48:20.102766 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:48:20.103574 | orchestrator | 2025-02-19 08:48:20.105259 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-19 08:48:20.105537 | orchestrator | Wednesday 19 February 2025 08:48:20 +0000 (0:00:00.555) 0:00:24.375 **** 2025-02-19 08:48:20.254443 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:48:20.254989 | orchestrator | 2025-02-19 08:48:20.256551 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-19 08:48:20.256959 | orchestrator | Wednesday 19 February 2025 08:48:20 +0000 (0:00:00.152) 0:00:24.528 **** 2025-02-19 08:48:20.449367 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'vg_name': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}) 2025-02-19 08:48:20.450069 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'vg_name': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}) 2025-02-19 08:48:20.450562 | orchestrator | 2025-02-19 08:48:20.451798 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-19 08:48:20.451994 | orchestrator | Wednesday 19 February 2025 08:48:20 +0000 (0:00:00.194) 0:00:24.723 **** 2025-02-19 08:48:20.632132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:20.632363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:20.634252 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:20.634990 | orchestrator | 2025-02-19 08:48:20.635026 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-19 08:48:20.635569 | orchestrator | Wednesday 19 February 2025 08:48:20 +0000 (0:00:00.179) 0:00:24.903 **** 2025-02-19 08:48:20.812763 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:20.813045 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:20.813350 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:20.814213 | orchestrator | 2025-02-19 08:48:20.814728 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-19 08:48:20.815542 | orchestrator | Wednesday 19 February 2025 08:48:20 +0000 (0:00:00.183) 0:00:25.086 **** 2025-02-19 08:48:21.024577 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'})  2025-02-19 08:48:21.025808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'})  2025-02-19 08:48:21.027055 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:48:21.027227 | orchestrator | 2025-02-19 08:48:21.030338 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-19 08:48:21.960806 | orchestrator | Wednesday 19 February 2025 08:48:21 +0000 (0:00:00.210) 0:00:25.297 **** 2025-02-19 08:48:21.960991 | orchestrator | ok: [testbed-node-3] => { 2025-02-19 08:48:21.961881 | orchestrator |  "lvm_report": { 2025-02-19 08:48:21.961962 | orchestrator |  "lv": [ 2025-02-19 08:48:21.962371 | orchestrator |  { 2025-02-19 08:48:21.963780 | orchestrator |  "lv_name": "osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9", 2025-02-19 08:48:21.963947 | orchestrator |  "vg_name": "ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9" 2025-02-19 08:48:21.964731 | orchestrator |  }, 2025-02-19 08:48:21.965532 | orchestrator |  { 2025-02-19 08:48:21.965904 | orchestrator |  "lv_name": "osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac", 2025-02-19 
08:48:21.966176 | orchestrator |  "vg_name": "ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac" 2025-02-19 08:48:21.966773 | orchestrator |  } 2025-02-19 08:48:21.967100 | orchestrator |  ], 2025-02-19 08:48:21.968174 | orchestrator |  "pv": [ 2025-02-19 08:48:21.968476 | orchestrator |  { 2025-02-19 08:48:21.968500 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-19 08:48:21.968758 | orchestrator |  "vg_name": "ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9" 2025-02-19 08:48:21.969329 | orchestrator |  }, 2025-02-19 08:48:21.969668 | orchestrator |  { 2025-02-19 08:48:21.969956 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-19 08:48:21.970629 | orchestrator |  "vg_name": "ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac" 2025-02-19 08:48:21.970946 | orchestrator |  } 2025-02-19 08:48:21.971569 | orchestrator |  ] 2025-02-19 08:48:21.971742 | orchestrator |  } 2025-02-19 08:48:21.972022 | orchestrator | } 2025-02-19 08:48:21.972317 | orchestrator | 2025-02-19 08:48:21.972627 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-19 08:48:21.972878 | orchestrator | 2025-02-19 08:48:21.973506 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-19 08:48:21.973766 | orchestrator | Wednesday 19 February 2025 08:48:21 +0000 (0:00:00.934) 0:00:26.232 **** 2025-02-19 08:48:22.209701 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-19 08:48:22.210174 | orchestrator | 2025-02-19 08:48:22.210194 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-19 08:48:22.210502 | orchestrator | Wednesday 19 February 2025 08:48:22 +0000 (0:00:00.251) 0:00:26.483 **** 2025-02-19 08:48:22.480710 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:22.481027 | orchestrator | 2025-02-19 08:48:22.481718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:22.482452 | orchestrator | Wednesday 19 February 2025 08:48:22 +0000 (0:00:00.271) 0:00:26.754 **** 2025-02-19 08:48:23.235308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-02-19 08:48:23.235831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-02-19 08:48:23.236387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-02-19 08:48:23.239139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-02-19 08:48:23.239548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-02-19 08:48:23.239578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-02-19 08:48:23.239595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-02-19 08:48:23.239615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-02-19 08:48:23.240233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-02-19 08:48:23.240700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-02-19 08:48:23.241600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-02-19 08:48:23.241710 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-02-19 08:48:23.242405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-02-19 08:48:23.242857 | orchestrator | 2025-02-19 08:48:23.243844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:23.244320 | orchestrator | Wednesday 19 February 2025 08:48:23 +0000 (0:00:00.753) 0:00:27.508 **** 2025-02-19 08:48:23.423044 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:23.423622 | orchestrator | 2025-02-19 08:48:23.423937 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:23.425173 | orchestrator | Wednesday 19 February 2025 08:48:23 +0000 (0:00:00.188) 0:00:27.696 **** 2025-02-19 08:48:23.658275 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:23.659488 | orchestrator | 2025-02-19 08:48:23.660337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:23.661355 | orchestrator | Wednesday 19 February 2025 08:48:23 +0000 (0:00:00.234) 0:00:27.930 **** 2025-02-19 08:48:23.871713 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:23.872002 | orchestrator | 2025-02-19 08:48:23.872589 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:23.873675 | orchestrator | Wednesday 19 February 2025 08:48:23 +0000 (0:00:00.214) 0:00:28.144 **** 2025-02-19 08:48:24.104445 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:24.104595 | orchestrator | 2025-02-19 08:48:24.105185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:24.105339 | orchestrator | Wednesday 19 February 2025 08:48:24 +0000 (0:00:00.234) 0:00:28.379 **** 2025-02-19 08:48:24.304292 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:24.305863 | orchestrator | 2025-02-19 08:48:24.305921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:24.511147 | orchestrator | Wednesday 19 February 2025 08:48:24 +0000 (0:00:00.197) 0:00:28.577 **** 2025-02-19 08:48:24.511244 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:24.511807 | orchestrator | 2025-02-19 08:48:24.511824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:24.513513 | orchestrator | Wednesday 19 February 2025 08:48:24 +0000 (0:00:00.206) 0:00:28.783 **** 2025-02-19 08:48:24.713478 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:24.713745 | orchestrator | 2025-02-19 08:48:24.713976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:24.713991 | orchestrator | Wednesday 19 February 2025 08:48:24 +0000 (0:00:00.202) 0:00:28.985 **** 2025-02-19 08:48:24.920550 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:24.921267 | orchestrator | 2025-02-19 08:48:24.922377 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:24.923202 | orchestrator | Wednesday 19 February 2025 08:48:24 +0000 (0:00:00.209) 0:00:29.195 **** 2025-02-19 08:48:25.625113 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6) 2025-02-19 08:48:25.627952 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6) 2025-02-19 08:48:25.628476 | orchestrator | 2025-02-19 08:48:25.628511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:25.629694 | orchestrator | Wednesday 19 February 2025 08:48:25 +0000 (0:00:00.702) 0:00:29.897 **** 2025-02-19 08:48:26.445924 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_923f2b44-0879-4277-a106-844be4b2565d) 2025-02-19 08:48:26.446264 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_923f2b44-0879-4277-a106-844be4b2565d) 2025-02-19 08:48:26.446297 | orchestrator | 2025-02-19 08:48:26.446322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:26.446984 | orchestrator | Wednesday 19 February 2025 08:48:26 +0000 (0:00:00.821) 0:00:30.719 **** 2025-02-19 08:48:26.902145 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0c5208c8-9aa1-4e87-9cdb-910770e18a0c) 2025-02-19 08:48:26.903834 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0c5208c8-9aa1-4e87-9cdb-910770e18a0c) 2025-02-19 08:48:26.904577 | orchestrator | 2025-02-19 08:48:26.905378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:26.906140 | orchestrator | Wednesday 19 February 2025 08:48:26 +0000 (0:00:00.455) 0:00:31.174 **** 2025-02-19 08:48:27.358508 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_69806146-708c-4195-b6c7-ec061db9d03d) 2025-02-19 08:48:27.360733 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_69806146-708c-4195-b6c7-ec061db9d03d) 2025-02-19 08:48:27.361266 | orchestrator | 2025-02-19 08:48:27.361919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:27.362901 | orchestrator | Wednesday 19 February 2025 08:48:27 +0000 (0:00:00.457) 0:00:31.631 **** 2025-02-19 08:48:27.697006 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-19 08:48:27.697559 | orchestrator | 2025-02-19 08:48:27.700214 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:28.195351 | orchestrator | Wednesday 19 February 2025 08:48:27 +0000 (0:00:00.337) 0:00:31.969 **** 2025-02-19 08:48:28.195473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-02-19 08:48:28.196914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-02-19 08:48:28.197871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-02-19 08:48:28.198834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-02-19 08:48:28.200188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-02-19 08:48:28.200973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-02-19 08:48:28.201007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-02-19 08:48:28.202148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-02-19 08:48:28.202726 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-02-19 08:48:28.203315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-02-19 08:48:28.204459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-02-19 08:48:28.205383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-02-19 08:48:28.206094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-02-19 08:48:28.206534 | orchestrator | 2025-02-19 08:48:28.206890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:28.207369 | orchestrator | Wednesday 19 February 2025 08:48:28 +0000 (0:00:00.500) 0:00:32.469 **** 2025-02-19 08:48:28.404314 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:28.405088 | orchestrator | 2025-02-19 08:48:28.405684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:28.406569 | orchestrator | Wednesday 19 February 2025 08:48:28 +0000 (0:00:00.208) 0:00:32.677 **** 2025-02-19 08:48:28.620224 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:28.620530 | orchestrator | 2025-02-19 08:48:28.621313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:28.622308 | orchestrator | Wednesday 19 February 2025 08:48:28 +0000 (0:00:00.216) 0:00:32.894 **** 2025-02-19 08:48:28.832462 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:28.833088 | orchestrator | 2025-02-19 08:48:28.833395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:28.836080 | orchestrator | Wednesday 19 February 2025 08:48:28 +0000 (0:00:00.210) 0:00:33.104 **** 2025-02-19 08:48:29.030101 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:29.031053 | orchestrator | 2025-02-19 08:48:29.031802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:29.032721 | orchestrator | Wednesday 19 February 2025 08:48:29 +0000 (0:00:00.198) 0:00:33.303 **** 2025-02-19 08:48:29.234777 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:29.234980 | orchestrator | 2025-02-19 08:48:29.237988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:29.238767 | orchestrator | Wednesday 19 February 2025 08:48:29 +0000 (0:00:00.204) 0:00:33.507 **** 2025-02-19 08:48:29.832562 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:29.833426 | orchestrator | 2025-02-19 08:48:29.836746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:30.033310 | orchestrator | Wednesday 19 February 2025 08:48:29 +0000 (0:00:00.597) 0:00:34.105 **** 2025-02-19 08:48:30.033527 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:30.033771 | orchestrator | 2025-02-19 08:48:30.034804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:30.035684 | orchestrator | Wednesday 19 February 2025 08:48:30 +0000 (0:00:00.201) 0:00:34.306 **** 2025-02-19 08:48:30.227893 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:30.228105 | orchestrator | 2025-02-19 08:48:30.228758 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-02-19 08:48:30.229456 | orchestrator | Wednesday 19 February 2025 08:48:30 +0000 (0:00:00.194) 0:00:34.501 **** 2025-02-19 08:48:30.889246 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-02-19 08:48:30.890285 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-02-19 08:48:30.891976 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-02-19 08:48:30.893080 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-02-19 08:48:30.894160 | orchestrator | 2025-02-19 08:48:30.894898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:30.895307 | orchestrator | Wednesday 19 February 2025 08:48:30 +0000 (0:00:00.659) 0:00:35.161 **** 2025-02-19 08:48:31.094330 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:31.094502 | orchestrator | 2025-02-19 08:48:31.094968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:31.095733 | orchestrator | Wednesday 19 February 2025 08:48:31 +0000 (0:00:00.206) 0:00:35.368 **** 2025-02-19 08:48:31.302434 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:31.303038 | orchestrator | 2025-02-19 08:48:31.303783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:31.304735 | orchestrator | Wednesday 19 February 2025 08:48:31 +0000 (0:00:00.207) 0:00:35.575 **** 2025-02-19 08:48:31.526254 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:31.529505 | orchestrator | 2025-02-19 08:48:31.531095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:31.531160 | orchestrator | Wednesday 19 February 2025 08:48:31 +0000 (0:00:00.224) 0:00:35.800 **** 2025-02-19 08:48:31.726811 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:31.727013 | orchestrator | 2025-02-19 08:48:31.728355 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-19 08:48:31.731540 | orchestrator | Wednesday 19 February 2025 08:48:31 +0000 (0:00:00.199) 0:00:35.999 **** 2025-02-19 08:48:31.887009 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:31.887297 | orchestrator | 2025-02-19 08:48:31.888392 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-19 08:48:31.889348 | orchestrator | Wednesday 19 February 2025 08:48:31 +0000 (0:00:00.160) 0:00:36.159 **** 2025-02-19 08:48:32.111127 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '118242ed-6ea1-54c4-bfaa-1565dde441bc'}}) 2025-02-19 08:48:32.112063 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}}) 2025-02-19 08:48:32.113984 | orchestrator | 2025-02-19 08:48:32.114228 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-19 08:48:32.114674 | orchestrator | Wednesday 19 February 2025 08:48:32 +0000 (0:00:00.226) 0:00:36.385 **** 2025-02-19 08:48:33.964786 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'}) 2025-02-19 08:48:33.967100 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 
'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}) 2025-02-19 08:48:33.967818 | orchestrator | 2025-02-19 08:48:33.969717 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-19 08:48:33.970252 | orchestrator | Wednesday 19 February 2025 08:48:33 +0000 (0:00:01.850) 0:00:38.235 **** 2025-02-19 08:48:34.136481 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:34.138340 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:34.138674 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:34.142240 | orchestrator | 2025-02-19 08:48:34.142791 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-19 08:48:34.143848 | orchestrator | Wednesday 19 February 2025 08:48:34 +0000 (0:00:00.174) 0:00:38.410 **** 2025-02-19 08:48:35.483507 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'}) 2025-02-19 08:48:35.484606 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}) 2025-02-19 08:48:35.484675 | orchestrator | 2025-02-19 08:48:35.485592 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-19 08:48:35.486210 | orchestrator | Wednesday 19 February 2025 08:48:35 +0000 (0:00:01.346) 0:00:39.756 **** 2025-02-19 08:48:35.674681 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:35.675498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:35.676874 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:35.677833 | orchestrator | 2025-02-19 08:48:35.678833 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-19 08:48:35.679878 | orchestrator | Wednesday 19 February 2025 08:48:35 +0000 (0:00:00.191) 0:00:39.948 **** 2025-02-19 08:48:35.838511 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:35.839721 | orchestrator | 2025-02-19 08:48:35.840881 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-02-19 08:48:35.841867 | orchestrator | Wednesday 19 February 2025 08:48:35 +0000 (0:00:00.163) 0:00:40.112 **** 2025-02-19 08:48:36.022341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:36.023427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:36.024914 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:36.026167 | orchestrator | 2025-02-19 08:48:36.028129 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-19 08:48:36.029024 | orchestrator | Wednesday 
19 February 2025 08:48:36 +0000 (0:00:00.181) 0:00:40.293 **** 2025-02-19 08:48:36.176219 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:36.179068 | orchestrator | 2025-02-19 08:48:36.179172 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-19 08:48:36.361221 | orchestrator | Wednesday 19 February 2025 08:48:36 +0000 (0:00:00.149) 0:00:40.443 **** 2025-02-19 08:48:36.361332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:36.362875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:36.364046 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:36.364730 | orchestrator | 2025-02-19 08:48:36.365917 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-19 08:48:36.366703 | orchestrator | Wednesday 19 February 2025 08:48:36 +0000 (0:00:00.190) 0:00:40.634 **** 2025-02-19 08:48:36.499756 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:36.501181 | orchestrator | 2025-02-19 08:48:36.502299 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-19 08:48:36.503484 | orchestrator | Wednesday 19 February 2025 08:48:36 +0000 (0:00:00.138) 0:00:40.773 **** 2025-02-19 08:48:36.669899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:36.670297 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:36.670720 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:36.670778 | orchestrator | 2025-02-19 08:48:36.671048 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-19 08:48:36.671585 | orchestrator | Wednesday 19 February 2025 08:48:36 +0000 (0:00:00.170) 0:00:40.944 **** 2025-02-19 08:48:37.025364 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:37.026598 | orchestrator | 2025-02-19 08:48:37.026916 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-19 08:48:37.028835 | orchestrator | Wednesday 19 February 2025 08:48:37 +0000 (0:00:00.353) 0:00:41.297 **** 2025-02-19 08:48:37.226603 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:37.227232 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:37.227710 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:37.228422 | orchestrator | 2025-02-19 08:48:37.229239 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-19 08:48:37.229908 | orchestrator | Wednesday 19 February 2025 08:48:37 +0000 (0:00:00.202) 0:00:41.499 **** 2025-02-19 08:48:37.416254 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 
'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:37.416972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:37.417979 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:37.418002 | orchestrator | 2025-02-19 08:48:37.418426 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-19 08:48:37.419255 | orchestrator | Wednesday 19 February 2025 08:48:37 +0000 (0:00:00.189) 0:00:41.689 **** 2025-02-19 08:48:37.599171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:37.599563 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:37.599635 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:37.601327 | orchestrator | 2025-02-19 08:48:37.601547 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-19 08:48:37.601912 | orchestrator | Wednesday 19 February 2025 08:48:37 +0000 (0:00:00.182) 0:00:41.872 **** 2025-02-19 08:48:37.743118 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:37.743326 | orchestrator | 2025-02-19 08:48:37.744360 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-19 08:48:37.750954 | orchestrator | Wednesday 19 February 2025 08:48:37 +0000 (0:00:00.143) 0:00:42.016 **** 2025-02-19 08:48:37.885506 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:37.887192 | orchestrator | 2025-02-19 08:48:37.887633 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-19 08:48:37.887714 | orchestrator | Wednesday 19 February 2025 08:48:37 +0000 (0:00:00.142) 0:00:42.158 **** 2025-02-19 08:48:38.033065 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:38.034921 | orchestrator | 2025-02-19 08:48:38.035165 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-19 08:48:38.035191 | orchestrator | Wednesday 19 February 2025 08:48:38 +0000 (0:00:00.147) 0:00:42.306 **** 2025-02-19 08:48:38.185635 | orchestrator | ok: [testbed-node-4] => { 2025-02-19 08:48:38.185908 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-19 08:48:38.186199 | orchestrator | } 2025-02-19 08:48:38.187334 | orchestrator | 2025-02-19 08:48:38.187454 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-19 08:48:38.187820 | orchestrator | Wednesday 19 February 2025 08:48:38 +0000 (0:00:00.153) 0:00:42.459 **** 2025-02-19 08:48:38.354610 | orchestrator | ok: [testbed-node-4] => { 2025-02-19 08:48:38.354836 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-19 08:48:38.355223 | orchestrator | } 2025-02-19 08:48:38.355929 | orchestrator | 2025-02-19 08:48:38.356489 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-19 08:48:38.357779 | orchestrator | Wednesday 19 February 2025 08:48:38 +0000 (0:00:00.168) 0:00:42.627 **** 2025-02-19 08:48:38.511414 | orchestrator | ok: [testbed-node-4] => { 2025-02-19 08:48:38.511895 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-19 
08:48:38.512575 | orchestrator | } 2025-02-19 08:48:38.513123 | orchestrator | 2025-02-19 08:48:38.513844 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-19 08:48:38.516594 | orchestrator | Wednesday 19 February 2025 08:48:38 +0000 (0:00:00.157) 0:00:42.785 **** 2025-02-19 08:48:39.071387 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:39.072298 | orchestrator | 2025-02-19 08:48:39.073073 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-19 08:48:39.074002 | orchestrator | Wednesday 19 February 2025 08:48:39 +0000 (0:00:00.559) 0:00:43.344 **** 2025-02-19 08:48:39.664811 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:39.665000 | orchestrator | 2025-02-19 08:48:39.665551 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-02-19 08:48:39.666221 | orchestrator | Wednesday 19 February 2025 08:48:39 +0000 (0:00:00.593) 0:00:43.938 **** 2025-02-19 08:48:40.363021 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:40.364001 | orchestrator | 2025-02-19 08:48:40.364683 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-19 08:48:40.365217 | orchestrator | Wednesday 19 February 2025 08:48:40 +0000 (0:00:00.697) 0:00:44.635 **** 2025-02-19 08:48:40.498680 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:40.499376 | orchestrator | 2025-02-19 08:48:40.500219 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-19 08:48:40.501371 | orchestrator | Wednesday 19 February 2025 08:48:40 +0000 (0:00:00.136) 0:00:44.772 **** 2025-02-19 08:48:40.619019 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:40.620070 | orchestrator | 2025-02-19 08:48:40.621109 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-19 08:48:40.622481 | orchestrator | Wednesday 19 February 2025 08:48:40 +0000 (0:00:00.120) 0:00:44.892 **** 2025-02-19 08:48:40.742510 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:40.743730 | orchestrator | 2025-02-19 08:48:40.744756 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-19 08:48:40.745686 | orchestrator | Wednesday 19 February 2025 08:48:40 +0000 (0:00:00.122) 0:00:45.015 **** 2025-02-19 08:48:40.892757 | orchestrator | ok: [testbed-node-4] => { 2025-02-19 08:48:40.895566 | orchestrator |  "vgs_report": { 2025-02-19 08:48:40.896137 | orchestrator |  "vg": [] 2025-02-19 08:48:40.897178 | orchestrator |  } 2025-02-19 08:48:40.898065 | orchestrator | } 2025-02-19 08:48:40.898526 | orchestrator | 2025-02-19 08:48:40.899148 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-19 08:48:40.899691 | orchestrator | Wednesday 19 February 2025 08:48:40 +0000 (0:00:00.151) 0:00:45.166 **** 2025-02-19 08:48:41.028382 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:41.028915 | orchestrator | 2025-02-19 08:48:41.029145 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-19 08:48:41.029232 | orchestrator | Wednesday 19 February 2025 08:48:41 +0000 (0:00:00.135) 0:00:45.302 **** 2025-02-19 08:48:41.201177 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:41.201746 | orchestrator | 2025-02-19 08:48:41.202769 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-02-19 08:48:41.204066 | orchestrator | Wednesday 19 February 2025 08:48:41 +0000 (0:00:00.171) 0:00:45.473 **** 2025-02-19 08:48:41.353031 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:41.354371 | orchestrator | 2025-02-19 08:48:41.354882 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-19 08:48:41.355751 | orchestrator | Wednesday 19 February 2025 08:48:41 +0000 (0:00:00.153) 0:00:45.626 **** 2025-02-19 08:48:41.495273 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:41.495501 | orchestrator | 2025-02-19 08:48:41.497165 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-19 08:48:41.498240 | orchestrator | Wednesday 19 February 2025 08:48:41 +0000 (0:00:00.142) 0:00:45.769 **** 2025-02-19 08:48:41.643392 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:41.648453 | orchestrator | 2025-02-19 08:48:41.650073 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-19 08:48:41.650588 | orchestrator | Wednesday 19 February 2025 08:48:41 +0000 (0:00:00.147) 0:00:45.916 **** 2025-02-19 08:48:41.783152 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:41.783337 | orchestrator | 2025-02-19 08:48:41.785132 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-19 08:48:41.787900 | orchestrator | Wednesday 19 February 2025 08:48:41 +0000 (0:00:00.138) 0:00:46.055 **** 2025-02-19 08:48:41.928438 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:41.929711 | orchestrator | 2025-02-19 08:48:41.930610 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-19 08:48:41.931178 | orchestrator | Wednesday 19 February 2025 08:48:41 +0000 (0:00:00.146) 0:00:46.201 **** 2025-02-19 08:48:42.279080 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:42.279318 | orchestrator | 2025-02-19 08:48:42.279347 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-19 08:48:42.280280 | orchestrator | Wednesday 19 February 2025 08:48:42 +0000 (0:00:00.347) 0:00:46.549 **** 2025-02-19 08:48:42.425797 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:42.426129 | orchestrator | 2025-02-19 08:48:42.426610 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-19 08:48:42.427331 | orchestrator | Wednesday 19 February 2025 08:48:42 +0000 (0:00:00.149) 0:00:46.699 **** 2025-02-19 08:48:42.569490 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:42.571588 | orchestrator | 2025-02-19 08:48:42.572200 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-19 08:48:42.573057 | orchestrator | Wednesday 19 February 2025 08:48:42 +0000 (0:00:00.142) 0:00:46.841 **** 2025-02-19 08:48:42.700715 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:42.701123 | orchestrator | 2025-02-19 08:48:42.702485 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-19 08:48:42.704233 | orchestrator | Wednesday 19 February 2025 08:48:42 +0000 (0:00:00.130) 0:00:46.972 **** 2025-02-19 08:48:42.852705 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:42.853861 | orchestrator | 2025-02-19 08:48:42.856820 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-02-19 08:48:42.857399 | orchestrator | Wednesday 19 February 2025 08:48:42 +0000 (0:00:00.152) 0:00:47.125 **** 2025-02-19 08:48:42.998888 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:42.999084 | orchestrator | 2025-02-19 08:48:42.999870 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-19 08:48:43.002603 | orchestrator | Wednesday 19 February 2025 08:48:42 +0000 (0:00:00.146) 0:00:47.271 **** 2025-02-19 08:48:43.167187 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:43.351496 | orchestrator | 2025-02-19 08:48:43.351613 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-19 08:48:43.351633 | orchestrator | Wednesday 19 February 2025 08:48:43 +0000 (0:00:00.167) 0:00:47.439 **** 2025-02-19 08:48:43.351710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:43.352851 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:43.353531 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:43.354512 | orchestrator | 2025-02-19 08:48:43.355323 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-19 08:48:43.356328 | orchestrator | Wednesday 19 February 2025 08:48:43 +0000 (0:00:00.186) 0:00:47.625 **** 2025-02-19 08:48:43.608245 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:43.608704 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:43.609898 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:43.610690 | orchestrator | 2025-02-19 08:48:43.611826 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-19 08:48:43.613318 | orchestrator | Wednesday 19 February 2025 08:48:43 +0000 (0:00:00.255) 0:00:47.881 **** 2025-02-19 08:48:43.807387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:43.807574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:43.807605 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:43.808276 | orchestrator | 2025-02-19 08:48:43.808957 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-19 08:48:43.809503 | orchestrator | Wednesday 19 February 2025 08:48:43 +0000 (0:00:00.198) 0:00:48.079 **** 2025-02-19 08:48:43.979113 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:43.980419 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 
08:48:43.980799 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:43.981926 | orchestrator | 2025-02-19 08:48:43.982923 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-19 08:48:43.983689 | orchestrator | Wednesday 19 February 2025 08:48:43 +0000 (0:00:00.173) 0:00:48.252 **** 2025-02-19 08:48:44.154573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:44.156006 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:44.156454 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:44.158425 | orchestrator | 2025-02-19 08:48:44.159632 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-19 08:48:44.161363 | orchestrator | Wednesday 19 February 2025 08:48:44 +0000 (0:00:00.174) 0:00:48.427 **** 2025-02-19 08:48:44.524289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:44.525632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:44.526267 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:44.530537 | orchestrator | 2025-02-19 08:48:44.701442 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-19 08:48:44.701564 | orchestrator | Wednesday 19 February 2025 08:48:44 +0000 (0:00:00.370) 0:00:48.798 **** 2025-02-19 08:48:44.701601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:44.702084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:44.703704 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:44.704545 | orchestrator | 2025-02-19 08:48:44.706889 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-19 08:48:44.708327 | orchestrator | Wednesday 19 February 2025 08:48:44 +0000 (0:00:00.176) 0:00:48.974 **** 2025-02-19 08:48:44.892106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:44.892335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:44.892801 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:44.896678 | orchestrator | 2025-02-19 08:48:45.449547 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-19 08:48:45.449634 | orchestrator | Wednesday 19 February 2025 08:48:44 +0000 (0:00:00.190) 0:00:49.165 **** 2025-02-19 08:48:45.449684 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:45.451586 | orchestrator | 2025-02-19 08:48:45.452983 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-02-19 08:48:45.454635 | orchestrator | Wednesday 19 February 2025 08:48:45 +0000 (0:00:00.556) 0:00:49.722 **** 2025-02-19 08:48:45.969859 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:45.970706 | orchestrator | 2025-02-19 08:48:45.971088 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-19 08:48:45.971356 | orchestrator | Wednesday 19 February 2025 08:48:45 +0000 (0:00:00.522) 0:00:50.244 **** 2025-02-19 08:48:46.123886 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:48:46.125810 | orchestrator | 2025-02-19 08:48:46.129186 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-19 08:48:46.129219 | orchestrator | Wednesday 19 February 2025 08:48:46 +0000 (0:00:00.151) 0:00:50.395 **** 2025-02-19 08:48:46.320629 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'vg_name': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'}) 2025-02-19 08:48:46.321099 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'vg_name': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}) 2025-02-19 08:48:46.321636 | orchestrator | 2025-02-19 08:48:46.322529 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-19 08:48:46.323043 | orchestrator | Wednesday 19 February 2025 08:48:46 +0000 (0:00:00.197) 0:00:50.593 **** 2025-02-19 08:48:46.534499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:46.535382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:46.536776 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:46.538378 | orchestrator | 2025-02-19 08:48:46.540113 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-19 08:48:46.541743 | orchestrator | Wednesday 19 February 2025 08:48:46 +0000 (0:00:00.214) 0:00:50.807 **** 2025-02-19 08:48:46.718356 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:46.719441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:46.721024 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:46.721488 | orchestrator | 2025-02-19 08:48:46.722282 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-19 08:48:46.723436 | orchestrator | Wednesday 19 February 2025 08:48:46 +0000 (0:00:00.183) 0:00:50.991 **** 2025-02-19 08:48:46.914779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'})  2025-02-19 08:48:46.918891 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'})  2025-02-19 08:48:46.919004 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:48:46.919217 | orchestrator | 2025-02-19 
08:48:46.921069 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-19 08:48:46.921727 | orchestrator | Wednesday 19 February 2025 08:48:46 +0000 (0:00:00.193) 0:00:51.185 **** 2025-02-19 08:48:47.798289 | orchestrator | ok: [testbed-node-4] => { 2025-02-19 08:48:47.800891 | orchestrator |  "lvm_report": { 2025-02-19 08:48:47.804445 | orchestrator |  "lv": [ 2025-02-19 08:48:47.805229 | orchestrator |  { 2025-02-19 08:48:47.805267 | orchestrator |  "lv_name": "osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc", 2025-02-19 08:48:47.805284 | orchestrator |  "vg_name": "ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc" 2025-02-19 08:48:47.805330 | orchestrator |  }, 2025-02-19 08:48:47.805354 | orchestrator |  { 2025-02-19 08:48:47.805418 | orchestrator |  "lv_name": "osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54", 2025-02-19 08:48:47.806012 | orchestrator |  "vg_name": "ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54" 2025-02-19 08:48:47.806180 | orchestrator |  } 2025-02-19 08:48:47.806821 | orchestrator |  ], 2025-02-19 08:48:47.807069 | orchestrator |  "pv": [ 2025-02-19 08:48:47.807452 | orchestrator |  { 2025-02-19 08:48:47.807796 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-19 08:48:47.808367 | orchestrator |  "vg_name": "ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc" 2025-02-19 08:48:47.808848 | orchestrator |  }, 2025-02-19 08:48:47.809213 | orchestrator |  { 2025-02-19 08:48:47.809707 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-19 08:48:47.809982 | orchestrator |  "vg_name": "ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54" 2025-02-19 08:48:47.810355 | orchestrator |  } 2025-02-19 08:48:47.810782 | orchestrator |  ] 2025-02-19 08:48:47.811276 | orchestrator |  } 2025-02-19 08:48:47.811502 | orchestrator | } 2025-02-19 08:48:47.811890 | orchestrator | 2025-02-19 08:48:47.812102 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-19 08:48:47.812400 | orchestrator | 2025-02-19 08:48:47.813173 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-19 08:48:47.814122 | orchestrator | Wednesday 19 February 2025 08:48:47 +0000 (0:00:00.885) 0:00:52.070 **** 2025-02-19 08:48:48.064016 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-19 08:48:48.065137 | orchestrator | 2025-02-19 08:48:48.065953 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-19 08:48:48.066879 | orchestrator | Wednesday 19 February 2025 08:48:48 +0000 (0:00:00.265) 0:00:52.336 **** 2025-02-19 08:48:48.328210 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:48:48.328730 | orchestrator | 2025-02-19 08:48:48.328953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:48.329527 | orchestrator | Wednesday 19 February 2025 08:48:48 +0000 (0:00:00.263) 0:00:52.600 **** 2025-02-19 08:48:48.793961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-02-19 08:48:48.794541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-02-19 08:48:48.795376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-02-19 08:48:48.795915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-02-19 08:48:48.798478 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-02-19 08:48:48.799027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-02-19 08:48:48.799816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-02-19 08:48:48.800468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-02-19 08:48:48.800821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-02-19 08:48:48.801337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-02-19 08:48:48.801878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-02-19 08:48:48.802799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-02-19 08:48:48.803635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-02-19 08:48:48.804382 | orchestrator | 2025-02-19 08:48:48.804862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:48.805819 | orchestrator | Wednesday 19 February 2025 08:48:48 +0000 (0:00:00.467) 0:00:53.067 **** 2025-02-19 08:48:49.370446 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:49.371111 | orchestrator | 2025-02-19 08:48:49.372095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:49.374312 | orchestrator | Wednesday 19 February 2025 08:48:49 +0000 (0:00:00.575) 0:00:53.642 **** 2025-02-19 08:48:49.577403 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:49.578546 | orchestrator | 2025-02-19 08:48:49.578713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:49.579351 | orchestrator | Wednesday 19 February 2025 08:48:49 +0000 (0:00:00.207) 0:00:53.850 **** 2025-02-19 08:48:49.781051 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:49.781281 | orchestrator | 2025-02-19 08:48:49.781952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:49.782676 | orchestrator | Wednesday 19 February 2025 08:48:49 +0000 (0:00:00.204) 0:00:54.054 **** 2025-02-19 08:48:50.007236 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:50.007582 | orchestrator | 2025-02-19 08:48:50.007618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:50.007689 | orchestrator | Wednesday 19 February 2025 08:48:49 +0000 (0:00:00.221) 0:00:54.276 **** 2025-02-19 08:48:50.207989 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:50.210780 | orchestrator | 2025-02-19 08:48:50.212178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:50.411448 | orchestrator | Wednesday 19 February 2025 08:48:50 +0000 (0:00:00.202) 0:00:54.479 **** 2025-02-19 08:48:50.411615 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:50.414866 | orchestrator | 2025-02-19 08:48:50.628948 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:50.629047 | orchestrator | Wednesday 19 February 2025 08:48:50 +0000 (0:00:00.204) 0:00:54.683 **** 2025-02-19 08:48:50.629071 | orchestrator | skipping: 
[testbed-node-5] 2025-02-19 08:48:50.629408 | orchestrator | 2025-02-19 08:48:50.629428 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:50.630220 | orchestrator | Wednesday 19 February 2025 08:48:50 +0000 (0:00:00.215) 0:00:54.898 **** 2025-02-19 08:48:50.856474 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:50.857028 | orchestrator | 2025-02-19 08:48:50.858233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:50.858571 | orchestrator | Wednesday 19 February 2025 08:48:50 +0000 (0:00:00.230) 0:00:55.129 **** 2025-02-19 08:48:51.302949 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb) 2025-02-19 08:48:51.303329 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb) 2025-02-19 08:48:51.304848 | orchestrator | 2025-02-19 08:48:51.305150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:51.307529 | orchestrator | Wednesday 19 February 2025 08:48:51 +0000 (0:00:00.446) 0:00:55.576 **** 2025-02-19 08:48:52.014377 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eb5d754e-727a-4983-9d71-2a65afff7a52) 2025-02-19 08:48:52.016635 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eb5d754e-727a-4983-9d71-2a65afff7a52) 2025-02-19 08:48:52.017718 | orchestrator | 2025-02-19 08:48:52.017882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:52.018489 | orchestrator | Wednesday 19 February 2025 08:48:52 +0000 (0:00:00.707) 0:00:56.283 **** 2025-02-19 08:48:52.920927 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_00a01370-945d-463a-a32d-5e52b5234eb4) 2025-02-19 08:48:52.921084 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_00a01370-945d-463a-a32d-5e52b5234eb4) 2025-02-19 08:48:52.921107 | orchestrator | 2025-02-19 08:48:52.921142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:52.921175 | orchestrator | Wednesday 19 February 2025 08:48:52 +0000 (0:00:00.907) 0:00:57.190 **** 2025-02-19 08:48:53.387353 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_933f95c9-b090-4d95-b9b7-90a087e62286) 2025-02-19 08:48:53.387523 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_933f95c9-b090-4d95-b9b7-90a087e62286) 2025-02-19 08:48:53.387544 | orchestrator | 2025-02-19 08:48:53.387561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-19 08:48:53.387800 | orchestrator | Wednesday 19 February 2025 08:48:53 +0000 (0:00:00.469) 0:00:57.660 **** 2025-02-19 08:48:53.756765 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-19 08:48:53.756917 | orchestrator | 2025-02-19 08:48:53.759286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:53.759306 | orchestrator | Wednesday 19 February 2025 08:48:53 +0000 (0:00:00.369) 0:00:58.030 **** 2025-02-19 08:48:54.303395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-02-19 08:48:54.303809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
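(Editor's note: the `_add-device-links.yml` and `_add-device-partitions.yml` includes seen here run once per block device and record each device's persistent `/dev/disk/by-id/` links and kernel partitions, which is how entries such as `scsi-0QEMU_QEMU_HARDDISK_<uuid>` and `sda1`/`sda14`/`sda15`/`sda16` end up in the list of available block devices. The task files themselves are not shown in this log; the following is only an illustrative shell sketch of collecting the same information, not the playbook's actual implementation.)

```bash
# Illustrative only: enumerate the data the included task files appear to gather.
# Device names (/dev/sdb, /dev/sda) are taken from the log; everything else is an assumption.

# Persistent by-id links that resolve to a given disk (e.g. scsi-0QEMU_QEMU_HARDDISK_<uuid>):
for link in /dev/disk/by-id/*; do
    [ "$(readlink -f "$link")" = "/dev/sdb" ] && basename "$link"
done

# Partitions of a disk as reported by the kernel (e.g. sda1, sda14, sda15, sda16):
lsblk -nro NAME,TYPE /dev/sda | awk '$2 == "part" {print $1}'
```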
2025-02-19 08:48:54.304306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-02-19 08:48:54.304802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-02-19 08:48:54.305485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-02-19 08:48:54.306479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-02-19 08:48:54.306828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-02-19 08:48:54.307354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-02-19 08:48:54.308484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-02-19 08:48:54.309046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-02-19 08:48:54.309753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-02-19 08:48:54.310560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-02-19 08:48:54.311264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-02-19 08:48:54.311768 | orchestrator | 2025-02-19 08:48:54.312347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:54.313960 | orchestrator | Wednesday 19 February 2025 08:48:54 +0000 (0:00:00.546) 0:00:58.576 **** 2025-02-19 08:48:54.504750 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:54.506164 | orchestrator | 2025-02-19 08:48:54.506266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:54.506296 | orchestrator | Wednesday 19 February 2025 08:48:54 +0000 (0:00:00.198) 0:00:58.775 **** 2025-02-19 08:48:54.695238 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:54.696268 | orchestrator | 2025-02-19 08:48:54.697176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:54.698269 | orchestrator | Wednesday 19 February 2025 08:48:54 +0000 (0:00:00.192) 0:00:58.968 **** 2025-02-19 08:48:54.912412 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:54.912883 | orchestrator | 2025-02-19 08:48:54.913865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:54.914391 | orchestrator | Wednesday 19 February 2025 08:48:54 +0000 (0:00:00.216) 0:00:59.184 **** 2025-02-19 08:48:55.120914 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:55.121749 | orchestrator | 2025-02-19 08:48:55.122846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:55.123414 | orchestrator | Wednesday 19 February 2025 08:48:55 +0000 (0:00:00.210) 0:00:59.395 **** 2025-02-19 08:48:55.322124 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:55.323639 | orchestrator | 2025-02-19 08:48:55.323711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:55.326271 | orchestrator | Wednesday 19 February 2025 08:48:55 +0000 (0:00:00.199) 0:00:59.594 **** 2025-02-19 08:48:55.550130 | orchestrator | 
skipping: [testbed-node-5] 2025-02-19 08:48:55.550522 | orchestrator | 2025-02-19 08:48:55.550910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:55.551528 | orchestrator | Wednesday 19 February 2025 08:48:55 +0000 (0:00:00.228) 0:00:59.823 **** 2025-02-19 08:48:56.073956 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:56.074618 | orchestrator | 2025-02-19 08:48:56.075855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:56.076401 | orchestrator | Wednesday 19 February 2025 08:48:56 +0000 (0:00:00.523) 0:01:00.347 **** 2025-02-19 08:48:56.294349 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:56.294705 | orchestrator | 2025-02-19 08:48:56.295172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:56.295936 | orchestrator | Wednesday 19 February 2025 08:48:56 +0000 (0:00:00.221) 0:01:00.568 **** 2025-02-19 08:48:57.001240 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-02-19 08:48:57.001887 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-02-19 08:48:57.002080 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-02-19 08:48:57.002119 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-02-19 08:48:57.002199 | orchestrator | 2025-02-19 08:48:57.002840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:57.003214 | orchestrator | Wednesday 19 February 2025 08:48:56 +0000 (0:00:00.705) 0:01:01.274 **** 2025-02-19 08:48:57.216666 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:57.216876 | orchestrator | 2025-02-19 08:48:57.217171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:57.217740 | orchestrator | Wednesday 19 February 2025 08:48:57 +0000 (0:00:00.215) 0:01:01.489 **** 2025-02-19 08:48:57.428037 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:57.631863 | orchestrator | 2025-02-19 08:48:57.631989 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:57.632000 | orchestrator | Wednesday 19 February 2025 08:48:57 +0000 (0:00:00.211) 0:01:01.700 **** 2025-02-19 08:48:57.632018 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:57.632055 | orchestrator | 2025-02-19 08:48:57.632064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-19 08:48:57.857915 | orchestrator | Wednesday 19 February 2025 08:48:57 +0000 (0:00:00.203) 0:01:01.904 **** 2025-02-19 08:48:57.858139 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:57.858306 | orchestrator | 2025-02-19 08:48:57.859292 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-19 08:48:57.860722 | orchestrator | Wednesday 19 February 2025 08:48:57 +0000 (0:00:00.226) 0:01:02.130 **** 2025-02-19 08:48:57.998190 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:48:57.998573 | orchestrator | 2025-02-19 08:48:57.999206 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-19 08:48:58.000372 | orchestrator | Wednesday 19 February 2025 08:48:57 +0000 (0:00:00.141) 0:01:02.271 **** 2025-02-19 08:48:58.240149 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'45b4b457-0c8f-5565-8330-30b761ce6399'}}) 2025-02-19 08:48:58.240317 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '185b0f4c-91cb-52bd-aac1-e01f69de71f3'}}) 2025-02-19 08:48:58.241969 | orchestrator | 2025-02-19 08:48:58.242486 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-19 08:48:58.243374 | orchestrator | Wednesday 19 February 2025 08:48:58 +0000 (0:00:00.241) 0:01:02.513 **** 2025-02-19 08:49:00.249972 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'}) 2025-02-19 08:49:00.250794 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'}) 2025-02-19 08:49:00.253206 | orchestrator | 2025-02-19 08:49:00.253282 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-19 08:49:00.434283 | orchestrator | Wednesday 19 February 2025 08:49:00 +0000 (0:00:02.008) 0:01:04.522 **** 2025-02-19 08:49:00.434521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:00.434677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:00.435730 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:00.436382 | orchestrator | 2025-02-19 08:49:00.436984 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-19 08:49:00.437491 | orchestrator | Wednesday 19 February 2025 08:49:00 +0000 (0:00:00.185) 0:01:04.707 **** 2025-02-19 08:49:01.796963 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'}) 2025-02-19 08:49:01.797140 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'}) 2025-02-19 08:49:01.797417 | orchestrator | 2025-02-19 08:49:01.797532 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-19 08:49:01.800966 | orchestrator | Wednesday 19 February 2025 08:49:01 +0000 (0:00:01.360) 0:01:06.067 **** 2025-02-19 08:49:01.965857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:01.966329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:01.966953 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:01.967605 | orchestrator | 2025-02-19 08:49:01.967967 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-19 08:49:01.968457 | orchestrator | Wednesday 19 February 2025 08:49:01 +0000 (0:00:00.172) 0:01:06.240 **** 2025-02-19 08:49:02.099142 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:02.099941 | orchestrator | 2025-02-19 08:49:02.101463 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-02-19 08:49:02.105595 | orchestrator | Wednesday 19 February 2025 08:49:02 +0000 (0:00:00.132) 0:01:06.373 **** 2025-02-19 08:49:02.280421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:02.281351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:02.281437 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:02.282010 | orchestrator | 2025-02-19 08:49:02.282721 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-19 08:49:02.285844 | orchestrator | Wednesday 19 February 2025 08:49:02 +0000 (0:00:00.180) 0:01:06.553 **** 2025-02-19 08:49:02.423526 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:02.424050 | orchestrator | 2025-02-19 08:49:02.425086 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-19 08:49:02.427025 | orchestrator | Wednesday 19 February 2025 08:49:02 +0000 (0:00:00.142) 0:01:06.696 **** 2025-02-19 08:49:02.650433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:02.652873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:02.653871 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:02.654823 | orchestrator | 2025-02-19 08:49:02.655506 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-19 08:49:02.656391 | orchestrator | Wednesday 19 February 2025 08:49:02 +0000 (0:00:00.226) 0:01:06.923 **** 2025-02-19 08:49:02.791210 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:02.959040 | orchestrator | 2025-02-19 08:49:02.959151 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-19 08:49:02.959171 | orchestrator | Wednesday 19 February 2025 08:49:02 +0000 (0:00:00.141) 0:01:07.064 **** 2025-02-19 08:49:02.959203 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:02.959383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:02.959420 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:02.960350 | orchestrator | 2025-02-19 08:49:02.960994 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-19 08:49:02.961977 | orchestrator | Wednesday 19 February 2025 08:49:02 +0000 (0:00:00.166) 0:01:07.231 **** 2025-02-19 08:49:03.110116 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:03.111478 | orchestrator | 2025-02-19 08:49:03.112928 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-19 08:49:03.114224 | orchestrator | Wednesday 19 February 2025 08:49:03 +0000 (0:00:00.151) 0:01:07.382 **** 2025-02-19 08:49:03.486798 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:03.488274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:03.489458 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:03.490153 | orchestrator | 2025-02-19 08:49:03.491634 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-19 08:49:03.492270 | orchestrator | Wednesday 19 February 2025 08:49:03 +0000 (0:00:00.374) 0:01:07.757 **** 2025-02-19 08:49:03.683185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:03.683971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:03.684406 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:03.686519 | orchestrator | 2025-02-19 08:49:03.856247 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-19 08:49:03.856384 | orchestrator | Wednesday 19 February 2025 08:49:03 +0000 (0:00:00.197) 0:01:07.955 **** 2025-02-19 08:49:03.856411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:03.856473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:03.856987 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:03.857871 | orchestrator | 2025-02-19 08:49:03.858320 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-19 08:49:03.863497 | orchestrator | Wednesday 19 February 2025 08:49:03 +0000 (0:00:00.174) 0:01:08.129 **** 2025-02-19 08:49:03.999422 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:04.000631 | orchestrator | 2025-02-19 08:49:04.001243 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-19 08:49:04.004291 | orchestrator | Wednesday 19 February 2025 08:49:03 +0000 (0:00:00.143) 0:01:08.272 **** 2025-02-19 08:49:04.150946 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:04.151545 | orchestrator | 2025-02-19 08:49:04.152377 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-19 08:49:04.153211 | orchestrator | Wednesday 19 February 2025 08:49:04 +0000 (0:00:00.151) 0:01:08.424 **** 2025-02-19 08:49:04.303253 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:04.303548 | orchestrator | 2025-02-19 08:49:04.304138 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-19 08:49:04.304882 | orchestrator | Wednesday 19 February 2025 08:49:04 +0000 (0:00:00.152) 0:01:08.576 **** 2025-02-19 08:49:04.453710 | orchestrator | ok: [testbed-node-5] => { 2025-02-19 08:49:04.454538 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-19 08:49:04.456906 | orchestrator | } 2025-02-19 08:49:04.457095 | orchestrator | 2025-02-19 08:49:04.457126 | orchestrator | 
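The tasks above count how many OSDs from lvm_volumes would land on each shared DB/WAL volume group and abort if that count exceeds the configured num_osds limit; in this run every counter is empty because the testbed defines no separate ceph_db/ceph_wal devices. A minimal sketch of that counting-and-limit logic follows; the lvm_volumes entries and the num_osds_per_db_vg value are illustrative assumptions, not taken from this deployment, and this is not the playbook's actual implementation.

    # Sketch: count OSDs requested per DB VG and fail if a VG is oversubscribed.
    from collections import Counter

    # Illustrative lvm_volumes-style entries (assumed shape, not from this run).
    lvm_volumes = [
        {"data": "osd-block-aaa", "data_vg": "ceph-aaa", "db": "db-aaa", "db_vg": "ceph-db-0"},
        {"data": "osd-block-bbb", "data_vg": "ceph-bbb", "db": "db-bbb", "db_vg": "ceph-db-0"},
    ]
    num_osds_per_db_vg = 2  # assumed maximum number of OSDs allowed to share one DB VG

    # Count how many OSDs want a DB LV on each DB VG.
    wanted_per_db_vg = Counter(v["db_vg"] for v in lvm_volumes if "db_vg" in v)

    for vg, count in wanted_per_db_vg.items():
        if count > num_osds_per_db_vg:
            raise SystemExit(f"{count} OSDs requested on DB VG {vg}, "
                             f"but only {num_osds_per_db_vg} are allowed")

    # With no ceph_db/ceph_wal devices defined, this dict is empty, as in the log.
    print(dict(wanted_per_db_vg))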
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-19 08:49:04.458176 | orchestrator | Wednesday 19 February 2025 08:49:04 +0000 (0:00:00.148) 0:01:08.725 **** 2025-02-19 08:49:04.601315 | orchestrator | ok: [testbed-node-5] => { 2025-02-19 08:49:04.602128 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-19 08:49:04.603088 | orchestrator | } 2025-02-19 08:49:04.603695 | orchestrator | 2025-02-19 08:49:04.604314 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-19 08:49:04.605257 | orchestrator | Wednesday 19 February 2025 08:49:04 +0000 (0:00:00.145) 0:01:08.871 **** 2025-02-19 08:49:04.745485 | orchestrator | ok: [testbed-node-5] => { 2025-02-19 08:49:04.745746 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-19 08:49:04.746205 | orchestrator | } 2025-02-19 08:49:04.746970 | orchestrator | 2025-02-19 08:49:04.747598 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-19 08:49:04.748398 | orchestrator | Wednesday 19 February 2025 08:49:04 +0000 (0:00:00.146) 0:01:09.018 **** 2025-02-19 08:49:05.267355 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:05.267680 | orchestrator | 2025-02-19 08:49:05.268893 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-19 08:49:05.269535 | orchestrator | Wednesday 19 February 2025 08:49:05 +0000 (0:00:00.521) 0:01:09.540 **** 2025-02-19 08:49:05.838939 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:05.839141 | orchestrator | 2025-02-19 08:49:05.839800 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-02-19 08:49:05.840369 | orchestrator | Wednesday 19 February 2025 08:49:05 +0000 (0:00:00.570) 0:01:10.110 **** 2025-02-19 08:49:06.581258 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:06.581969 | orchestrator | 2025-02-19 08:49:06.582585 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-19 08:49:06.584843 | orchestrator | Wednesday 19 February 2025 08:49:06 +0000 (0:00:00.743) 0:01:10.853 **** 2025-02-19 08:49:06.722350 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:06.722709 | orchestrator | 2025-02-19 08:49:06.722757 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-19 08:49:06.723089 | orchestrator | Wednesday 19 February 2025 08:49:06 +0000 (0:00:00.142) 0:01:10.996 **** 2025-02-19 08:49:06.836420 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:06.837679 | orchestrator | 2025-02-19 08:49:06.838203 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-19 08:49:06.839122 | orchestrator | Wednesday 19 February 2025 08:49:06 +0000 (0:00:00.114) 0:01:11.110 **** 2025-02-19 08:49:06.950938 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:06.951068 | orchestrator | 2025-02-19 08:49:06.951739 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-19 08:49:06.952501 | orchestrator | Wednesday 19 February 2025 08:49:06 +0000 (0:00:00.114) 0:01:11.224 **** 2025-02-19 08:49:07.086833 | orchestrator | ok: [testbed-node-5] => { 2025-02-19 08:49:07.087848 | orchestrator |  "vgs_report": { 2025-02-19 08:49:07.089059 | orchestrator |  "vg": [] 2025-02-19 08:49:07.090198 | orchestrator |  } 2025-02-19 08:49:07.090533 | orchestrator 
| } 2025-02-19 08:49:07.091245 | orchestrator | 2025-02-19 08:49:07.091822 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-19 08:49:07.092281 | orchestrator | Wednesday 19 February 2025 08:49:07 +0000 (0:00:00.134) 0:01:11.359 **** 2025-02-19 08:49:07.229349 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:07.229845 | orchestrator | 2025-02-19 08:49:07.230722 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-19 08:49:07.231343 | orchestrator | Wednesday 19 February 2025 08:49:07 +0000 (0:00:00.143) 0:01:11.502 **** 2025-02-19 08:49:07.371090 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:07.371287 | orchestrator | 2025-02-19 08:49:07.371939 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-19 08:49:07.372401 | orchestrator | Wednesday 19 February 2025 08:49:07 +0000 (0:00:00.141) 0:01:11.644 **** 2025-02-19 08:49:07.511226 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:07.511428 | orchestrator | 2025-02-19 08:49:07.511791 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-19 08:49:07.512139 | orchestrator | Wednesday 19 February 2025 08:49:07 +0000 (0:00:00.138) 0:01:11.783 **** 2025-02-19 08:49:07.643333 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:07.644568 | orchestrator | 2025-02-19 08:49:07.645290 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-19 08:49:07.646285 | orchestrator | Wednesday 19 February 2025 08:49:07 +0000 (0:00:00.131) 0:01:11.914 **** 2025-02-19 08:49:07.799061 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:07.799374 | orchestrator | 2025-02-19 08:49:07.804320 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-19 08:49:07.806085 | orchestrator | Wednesday 19 February 2025 08:49:07 +0000 (0:00:00.156) 0:01:12.071 **** 2025-02-19 08:49:07.949446 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:07.951151 | orchestrator | 2025-02-19 08:49:07.951799 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-19 08:49:07.952415 | orchestrator | Wednesday 19 February 2025 08:49:07 +0000 (0:00:00.150) 0:01:12.222 **** 2025-02-19 08:49:08.092620 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:08.093033 | orchestrator | 2025-02-19 08:49:08.094783 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-19 08:49:08.096225 | orchestrator | Wednesday 19 February 2025 08:49:08 +0000 (0:00:00.143) 0:01:12.366 **** 2025-02-19 08:49:08.441053 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:08.442139 | orchestrator | 2025-02-19 08:49:08.442210 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-19 08:49:08.442978 | orchestrator | Wednesday 19 February 2025 08:49:08 +0000 (0:00:00.347) 0:01:12.714 **** 2025-02-19 08:49:08.591438 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:08.591698 | orchestrator | 2025-02-19 08:49:08.592669 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-19 08:49:08.593846 | orchestrator | Wednesday 19 February 2025 08:49:08 +0000 (0:00:00.150) 0:01:12.864 **** 2025-02-19 08:49:08.753472 | orchestrator | 
skipping: [testbed-node-5] 2025-02-19 08:49:08.754770 | orchestrator | 2025-02-19 08:49:08.755971 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-19 08:49:08.756761 | orchestrator | Wednesday 19 February 2025 08:49:08 +0000 (0:00:00.162) 0:01:13.026 **** 2025-02-19 08:49:08.889737 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:08.890245 | orchestrator | 2025-02-19 08:49:08.891121 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-19 08:49:08.892136 | orchestrator | Wednesday 19 February 2025 08:49:08 +0000 (0:00:00.135) 0:01:13.162 **** 2025-02-19 08:49:09.035201 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:09.035891 | orchestrator | 2025-02-19 08:49:09.037849 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-02-19 08:49:09.038824 | orchestrator | Wednesday 19 February 2025 08:49:09 +0000 (0:00:00.145) 0:01:13.308 **** 2025-02-19 08:49:09.181245 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:09.182308 | orchestrator | 2025-02-19 08:49:09.182350 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-19 08:49:09.328779 | orchestrator | Wednesday 19 February 2025 08:49:09 +0000 (0:00:00.145) 0:01:13.453 **** 2025-02-19 08:49:09.328899 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:09.330294 | orchestrator | 2025-02-19 08:49:09.330325 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-19 08:49:09.502262 | orchestrator | Wednesday 19 February 2025 08:49:09 +0000 (0:00:00.145) 0:01:13.598 **** 2025-02-19 08:49:09.502390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:09.503720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:09.505136 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:09.506577 | orchestrator | 2025-02-19 08:49:09.507305 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-19 08:49:09.508119 | orchestrator | Wednesday 19 February 2025 08:49:09 +0000 (0:00:00.175) 0:01:13.773 **** 2025-02-19 08:49:09.656921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:09.658139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:09.658915 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:09.661291 | orchestrator | 2025-02-19 08:49:09.661839 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-19 08:49:09.661879 | orchestrator | Wednesday 19 February 2025 08:49:09 +0000 (0:00:00.156) 0:01:13.930 **** 2025-02-19 08:49:09.833873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:09.834421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:09.835508 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:09.836102 | orchestrator | 2025-02-19 08:49:09.837723 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-19 08:49:09.838527 | orchestrator | Wednesday 19 February 2025 08:49:09 +0000 (0:00:00.177) 0:01:14.107 **** 2025-02-19 08:49:10.004718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:10.005550 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:10.006160 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:10.008865 | orchestrator | 2025-02-19 08:49:10.009007 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-19 08:49:10.009621 | orchestrator | Wednesday 19 February 2025 08:49:09 +0000 (0:00:00.170) 0:01:14.277 **** 2025-02-19 08:49:10.176008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:10.176567 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:10.176635 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:10.176843 | orchestrator | 2025-02-19 08:49:10.176886 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-19 08:49:10.177067 | orchestrator | Wednesday 19 February 2025 08:49:10 +0000 (0:00:00.172) 0:01:14.450 **** 2025-02-19 08:49:10.558317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:10.559571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:10.561590 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:10.561781 | orchestrator | 2025-02-19 08:49:10.749518 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-19 08:49:10.749620 | orchestrator | Wednesday 19 February 2025 08:49:10 +0000 (0:00:00.380) 0:01:14.830 **** 2025-02-19 08:49:10.749667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:10.750557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:10.751304 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:10.751777 | orchestrator | 2025-02-19 08:49:10.752502 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-19 08:49:10.753335 | orchestrator | Wednesday 19 February 2025 08:49:10 +0000 (0:00:00.192) 0:01:15.022 **** 2025-02-19 08:49:10.925001 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:10.927727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:10.928299 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:10.929214 | orchestrator | 2025-02-19 08:49:10.929701 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-19 08:49:10.930389 | orchestrator | Wednesday 19 February 2025 08:49:10 +0000 (0:00:00.175) 0:01:15.198 **** 2025-02-19 08:49:11.467631 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:11.468845 | orchestrator | 2025-02-19 08:49:11.469381 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-02-19 08:49:11.469724 | orchestrator | Wednesday 19 February 2025 08:49:11 +0000 (0:00:00.543) 0:01:15.741 **** 2025-02-19 08:49:11.978567 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:11.980248 | orchestrator | 2025-02-19 08:49:11.981085 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-19 08:49:11.981738 | orchestrator | Wednesday 19 February 2025 08:49:11 +0000 (0:00:00.508) 0:01:16.249 **** 2025-02-19 08:49:12.147910 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:12.148088 | orchestrator | 2025-02-19 08:49:12.148342 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-19 08:49:12.149212 | orchestrator | Wednesday 19 February 2025 08:49:12 +0000 (0:00:00.170) 0:01:16.420 **** 2025-02-19 08:49:12.328035 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'vg_name': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'}) 2025-02-19 08:49:12.328257 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'vg_name': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'}) 2025-02-19 08:49:12.329093 | orchestrator | 2025-02-19 08:49:12.329683 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-19 08:49:12.329783 | orchestrator | Wednesday 19 February 2025 08:49:12 +0000 (0:00:00.180) 0:01:16.601 **** 2025-02-19 08:49:12.515907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:12.517212 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:12.517345 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:12.517374 | orchestrator | 2025-02-19 08:49:12.517861 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-19 08:49:12.521283 | orchestrator | Wednesday 19 February 2025 08:49:12 +0000 (0:00:00.184) 0:01:16.786 **** 2025-02-19 08:49:12.704346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:12.705428 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  
2025-02-19 08:49:12.708402 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:12.903821 | orchestrator | 2025-02-19 08:49:12.904022 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-19 08:49:12.904064 | orchestrator | Wednesday 19 February 2025 08:49:12 +0000 (0:00:00.189) 0:01:16.976 **** 2025-02-19 08:49:12.904115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'})  2025-02-19 08:49:12.904238 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'})  2025-02-19 08:49:12.905686 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:12.908206 | orchestrator | 2025-02-19 08:49:13.516623 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-19 08:49:13.516757 | orchestrator | Wednesday 19 February 2025 08:49:12 +0000 (0:00:00.199) 0:01:17.175 **** 2025-02-19 08:49:13.516779 | orchestrator | ok: [testbed-node-5] => { 2025-02-19 08:49:13.517896 | orchestrator |  "lvm_report": { 2025-02-19 08:49:13.518850 | orchestrator |  "lv": [ 2025-02-19 08:49:13.519993 | orchestrator |  { 2025-02-19 08:49:13.520784 | orchestrator |  "lv_name": "osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3", 2025-02-19 08:49:13.521756 | orchestrator |  "vg_name": "ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3" 2025-02-19 08:49:13.522615 | orchestrator |  }, 2025-02-19 08:49:13.523378 | orchestrator |  { 2025-02-19 08:49:13.524220 | orchestrator |  "lv_name": "osd-block-45b4b457-0c8f-5565-8330-30b761ce6399", 2025-02-19 08:49:13.524838 | orchestrator |  "vg_name": "ceph-45b4b457-0c8f-5565-8330-30b761ce6399" 2025-02-19 08:49:13.525802 | orchestrator |  } 2025-02-19 08:49:13.526309 | orchestrator |  ], 2025-02-19 08:49:13.527050 | orchestrator |  "pv": [ 2025-02-19 08:49:13.527895 | orchestrator |  { 2025-02-19 08:49:13.528598 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-19 08:49:13.529200 | orchestrator |  "vg_name": "ceph-45b4b457-0c8f-5565-8330-30b761ce6399" 2025-02-19 08:49:13.529967 | orchestrator |  }, 2025-02-19 08:49:13.530513 | orchestrator |  { 2025-02-19 08:49:13.530988 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-19 08:49:13.531608 | orchestrator |  "vg_name": "ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3" 2025-02-19 08:49:13.532119 | orchestrator |  } 2025-02-19 08:49:13.532755 | orchestrator |  ] 2025-02-19 08:49:13.533211 | orchestrator |  } 2025-02-19 08:49:13.533944 | orchestrator | } 2025-02-19 08:49:13.535059 | orchestrator | 2025-02-19 08:49:13.535984 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:49:13.536015 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-19 08:49:13.536037 | orchestrator | 2025-02-19 08:49:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:49:13.536237 | orchestrator | 2025-02-19 08:49:13 | INFO  | Please wait and do not abort execution. 
2025-02-19 08:49:13.536278 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-19 08:49:13.536899 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-19 08:49:13.537230 | orchestrator | 2025-02-19 08:49:13.537686 | orchestrator | 2025-02-19 08:49:13.537939 | orchestrator | 2025-02-19 08:49:13.538298 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:49:13.538659 | orchestrator | Wednesday 19 February 2025 08:49:13 +0000 (0:00:00.614) 0:01:17.790 **** 2025-02-19 08:49:13.539066 | orchestrator | =============================================================================== 2025-02-19 08:49:13.539583 | orchestrator | Create block VGs -------------------------------------------------------- 6.13s 2025-02-19 08:49:13.539728 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2025-02-19 08:49:13.540072 | orchestrator | Print LVM report data --------------------------------------------------- 2.43s 2025-02-19 08:49:13.540381 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.98s 2025-02-19 08:49:13.540707 | orchestrator | Add known links to the list of available block devices ------------------ 1.98s 2025-02-19 08:49:13.540970 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.81s 2025-02-19 08:49:13.541303 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.80s 2025-02-19 08:49:13.541596 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.63s 2025-02-19 08:49:13.541966 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.59s 2025-02-19 08:49:13.542265 | orchestrator | Add known partitions to the list of available block devices ------------- 1.53s 2025-02-19 08:49:13.542624 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.91s 2025-02-19 08:49:13.542928 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s 2025-02-19 08:49:13.543277 | orchestrator | Calculate size needed for WAL LVs on ceph_db_wal_devices ---------------- 0.85s 2025-02-19 08:49:13.543538 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-02-19 08:49:13.543808 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s 2025-02-19 08:49:13.544221 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s 2025-02-19 08:49:13.544514 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.75s 2025-02-19 08:49:13.545739 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.73s 2025-02-19 08:49:13.545782 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-02-19 08:49:15.500450 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-02-19 08:49:15.500606 | orchestrator | 2025-02-19 08:49:15 | INFO  | Task 9d902b77-d578-4515-bf82-d7654076cd0e (facts) was prepared for execution. 
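The lvm_report printed above (Ceph block LVs on ceph-* VGs backed by /dev/sdb and /dev/sdc) can be reproduced directly on a node from LVM's JSON report output. The sketch below shows one way to do that; the "ceph-" name filter and the selected columns are assumptions for illustration and may differ from the exact options the role uses.

    # Sketch: rebuild the lv/pv report shown in the log from lvs/pvs JSON output.
    import json
    import subprocess

    def lvm_report(cmd):
        # LVM's --reportformat json wraps the rows in {"report": [{...}]}.
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return json.loads(out)["report"][0]

    lvs = lvm_report(["lvs", "--reportformat", "json", "-o", "lv_name,vg_name"])["lv"]
    pvs = lvm_report(["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"])["pv"]

    # Keep only Ceph volumes (assumed "ceph-" VG name prefix, as seen in the log).
    report = {
        "lv": [e for e in lvs if e["vg_name"].startswith("ceph-")],
        "pv": [e for e in pvs if e["vg_name"].startswith("ceph-")],
    }
    print(json.dumps(report, indent=2))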
2025-02-19 08:49:18.905614 | orchestrator | 2025-02-19 08:49:15 | INFO  | It takes a moment until task 9d902b77-d578-4515-bf82-d7654076cd0e (facts) has been started and output is visible here. 2025-02-19 08:49:18.905833 | orchestrator | 2025-02-19 08:49:18.905910 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-19 08:49:18.907030 | orchestrator | 2025-02-19 08:49:18.907697 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-19 08:49:18.910802 | orchestrator | Wednesday 19 February 2025 08:49:18 +0000 (0:00:00.282) 0:00:00.282 **** 2025-02-19 08:49:20.039430 | orchestrator | ok: [testbed-manager] 2025-02-19 08:49:20.040747 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:49:20.041566 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:49:20.042408 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:49:20.043786 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:49:20.043978 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:49:20.044355 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:20.044997 | orchestrator | 2025-02-19 08:49:20.045794 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-19 08:49:20.046212 | orchestrator | Wednesday 19 February 2025 08:49:20 +0000 (0:00:01.131) 0:00:01.413 **** 2025-02-19 08:49:20.213397 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:49:20.299433 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:49:20.381243 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:49:20.458337 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:49:20.542347 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:49:21.306149 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:49:21.308820 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:21.310632 | orchestrator | 2025-02-19 08:49:21.311920 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-19 08:49:21.313122 | orchestrator | 2025-02-19 08:49:21.313845 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-19 08:49:21.316771 | orchestrator | Wednesday 19 February 2025 08:49:21 +0000 (0:00:01.269) 0:00:02.683 **** 2025-02-19 08:49:26.072552 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:49:26.074866 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:49:26.074943 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:49:26.074967 | orchestrator | ok: [testbed-manager] 2025-02-19 08:49:26.075002 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:49:26.078377 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:49:26.078510 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:49:26.078863 | orchestrator | 2025-02-19 08:49:26.079188 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-19 08:49:26.079586 | orchestrator | 2025-02-19 08:49:26.080366 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-19 08:49:26.080460 | orchestrator | Wednesday 19 February 2025 08:49:26 +0000 (0:00:04.767) 0:00:07.450 **** 2025-02-19 08:49:26.442827 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:49:26.524264 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:49:26.606857 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:49:26.686947 | orchestrator | skipping: [testbed-node-2] 2025-02-19 
08:49:26.759952 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:49:26.810283 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:49:26.813080 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:49:26.813999 | orchestrator | 2025-02-19 08:49:26.814463 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:49:26.814882 | orchestrator | 2025-02-19 08:49:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 08:49:26.815177 | orchestrator | 2025-02-19 08:49:26 | INFO  | Please wait and do not abort execution. 2025-02-19 08:49:26.816227 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:49:26.817290 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:49:26.818432 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:49:26.819360 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:49:26.820585 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:49:26.821003 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:49:26.821615 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-19 08:49:26.822549 | orchestrator | 2025-02-19 08:49:26.823093 | orchestrator | 2025-02-19 08:49:26.823637 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:49:26.823966 | orchestrator | Wednesday 19 February 2025 08:49:26 +0000 (0:00:00.737) 0:00:08.188 **** 2025-02-19 08:49:26.824474 | orchestrator | =============================================================================== 2025-02-19 08:49:26.824875 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.77s 2025-02-19 08:49:26.825301 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2025-02-19 08:49:26.825785 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-02-19 08:49:26.826136 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.74s 2025-02-19 08:49:27.411007 | orchestrator | 2025-02-19 08:49:27.413347 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Feb 19 08:49:27 UTC 2025 2025-02-19 08:49:28.825250 | orchestrator | 2025-02-19 08:49:28.825375 | orchestrator | 2025-02-19 08:49:28 | INFO  | Collection nutshell is prepared for execution 2025-02-19 08:49:28.828688 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [0] - dotfiles 2025-02-19 08:49:28.828733 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [0] - homer 2025-02-19 08:49:28.829947 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [0] - netdata 2025-02-19 08:49:28.829975 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [0] - openstackclient 2025-02-19 08:49:28.829991 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [0] - phpmyadmin 2025-02-19 08:49:28.830007 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [0] - common 2025-02-19 08:49:28.830073 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [1] -- loadbalancer 2025-02-19 08:49:28.830192 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [2] 
--- opensearch 2025-02-19 08:49:28.830211 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [2] --- mariadb-ng 2025-02-19 08:49:28.830230 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [3] ---- horizon 2025-02-19 08:49:28.830575 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [3] ---- keystone 2025-02-19 08:49:28.830604 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [4] ----- neutron 2025-02-19 08:49:28.830620 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [5] ------ wait-for-nova 2025-02-19 08:49:28.830635 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [5] ------ octavia 2025-02-19 08:49:28.830702 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [4] ----- barbican 2025-02-19 08:49:28.830775 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [4] ----- designate 2025-02-19 08:49:28.830795 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [4] ----- ironic 2025-02-19 08:49:28.831021 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [4] ----- placement 2025-02-19 08:49:28.831046 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [4] ----- magnum 2025-02-19 08:49:28.831067 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [1] -- openvswitch 2025-02-19 08:49:28.831339 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [2] --- ovn 2025-02-19 08:49:28.831627 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [1] -- memcached 2025-02-19 08:49:28.831672 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [1] -- redis 2025-02-19 08:49:28.831687 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [1] -- rabbitmq-ng 2025-02-19 08:49:28.831705 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [0] - kubernetes 2025-02-19 08:49:28.831996 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [1] -- kubeconfig 2025-02-19 08:49:28.832021 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [1] -- copy-kubeconfig 2025-02-19 08:49:28.832042 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [0] - ceph 2025-02-19 08:49:28.833281 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [1] -- ceph-pools 2025-02-19 08:49:28.833366 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [2] --- copy-ceph-keys 2025-02-19 08:49:28.833383 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [3] ---- cephclient 2025-02-19 08:49:28.833398 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-02-19 08:49:28.833413 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [4] ----- wait-for-keystone 2025-02-19 08:49:28.833430 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [5] ------ kolla-ceph-rgw 2025-02-19 08:49:28.833565 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [5] ------ glance 2025-02-19 08:49:28.833587 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [5] ------ cinder 2025-02-19 08:49:28.833603 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [5] ------ nova 2025-02-19 08:49:28.833623 | orchestrator | 2025-02-19 08:49:28 | INFO  | A [4] ----- prometheus 2025-02-19 08:49:28.970890 | orchestrator | 2025-02-19 08:49:28 | INFO  | D [5] ------ grafana 2025-02-19 08:49:28.970993 | orchestrator | 2025-02-19 08:49:28 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-02-19 08:49:30.785018 | orchestrator | 2025-02-19 08:49:28 | INFO  | Tasks are running in the background 2025-02-19 08:49:30.785143 | orchestrator | 2025-02-19 08:49:30 | INFO  | No task IDs specified, wait for all currently running tasks 2025-02-19 08:49:32.894736 | orchestrator | 2025-02-19 08:49:32 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:32.896052 | orchestrator | 2025-02-19 08:49:32 | 
INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state STARTED 2025-02-19 08:49:32.896746 | orchestrator | 2025-02-19 08:49:32 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:32.897508 | orchestrator | 2025-02-19 08:49:32 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:32.900428 | orchestrator | 2025-02-19 08:49:32 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:32.900794 | orchestrator | 2025-02-19 08:49:32 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:49:35.952469 | orchestrator | 2025-02-19 08:49:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:49:35.952749 | orchestrator | 2025-02-19 08:49:35 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:35.952855 | orchestrator | 2025-02-19 08:49:35 | INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state STARTED 2025-02-19 08:49:35.953257 | orchestrator | 2025-02-19 08:49:35 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:35.953827 | orchestrator | 2025-02-19 08:49:35 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:35.954382 | orchestrator | 2025-02-19 08:49:35 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:35.955019 | orchestrator | 2025-02-19 08:49:35 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:49:35.955133 | orchestrator | 2025-02-19 08:49:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:49:39.001395 | orchestrator | 2025-02-19 08:49:38 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:39.001804 | orchestrator | 2025-02-19 08:49:38 | INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state STARTED 2025-02-19 08:49:39.002406 | orchestrator | 2025-02-19 08:49:39 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:39.005586 | orchestrator | 2025-02-19 08:49:39 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:39.007323 | orchestrator | 2025-02-19 08:49:39 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:39.008863 | orchestrator | 2025-02-19 08:49:39 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:49:42.081240 | orchestrator | 2025-02-19 08:49:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:49:42.081370 | orchestrator | 2025-02-19 08:49:42 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:42.086394 | orchestrator | 2025-02-19 08:49:42 | INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state STARTED 2025-02-19 08:49:42.086455 | orchestrator | 2025-02-19 08:49:42 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:42.091013 | orchestrator | 2025-02-19 08:49:42 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:42.091067 | orchestrator | 2025-02-19 08:49:42 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:42.091102 | orchestrator | 2025-02-19 08:49:42 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:49:45.198126 | orchestrator | 2025-02-19 08:49:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:49:45.198290 | orchestrator | 
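The repeating status lines here come from the wait loop of the deploy step: the nutshell collection is enqueued as background tasks on the manager, and the CLI polls each task ID until it leaves the STARTED state, sleeping one second between rounds. A minimal stand-in sketch of such a loop is shown below; the shortened task IDs and the simulated state sequences are placeholders, and the real client queries the OSISM manager's result backend instead of a local dict.

    # Sketch: poll task states until all tasks report SUCCESS, as in the log output.
    import time

    # Simulated state sequences keyed by shortened task IDs (placeholders only).
    _states = {
        "fe2fe279": iter(["STARTED", "STARTED", "SUCCESS"]),
        "ba868eee": iter(["STARTED", "SUCCESS"]),
    }

    def get_task_state(task_id):
        # Stand-in for asking the manager for the task's current state.
        return next(_states[task_id], "SUCCESS")

    def wait_for_tasks(task_ids, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    wait_for_tasks(["fe2fe279", "ba868eee"])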
2025-02-19 08:49:45 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:45.198421 | orchestrator | 2025-02-19 08:49:45 | INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state STARTED 2025-02-19 08:49:45.198966 | orchestrator | 2025-02-19 08:49:45 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:45.200076 | orchestrator | 2025-02-19 08:49:45 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:45.201619 | orchestrator | 2025-02-19 08:49:45 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:45.204965 | orchestrator | 2025-02-19 08:49:45 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:49:48.315369 | orchestrator | 2025-02-19 08:49:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:49:48.315470 | orchestrator | 2025-02-19 08:49:48 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:48.321691 | orchestrator | 2025-02-19 08:49:48 | INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state STARTED 2025-02-19 08:49:48.338970 | orchestrator | 2025-02-19 08:49:48 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:48.343542 | orchestrator | 2025-02-19 08:49:48 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:48.343583 | orchestrator | 2025-02-19 08:49:48 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:48.343605 | orchestrator | 2025-02-19 08:49:48 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:49:51.395736 | orchestrator | 2025-02-19 08:49:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:49:51.395890 | orchestrator | 2025-02-19 08:49:51 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:51.396241 | orchestrator | 2025-02-19 08:49:51 | INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state STARTED 2025-02-19 08:49:51.399127 | orchestrator | 2025-02-19 08:49:51 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:51.400802 | orchestrator | 2025-02-19 08:49:51 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:51.403321 | orchestrator | 2025-02-19 08:49:51 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:51.407975 | orchestrator | 2025-02-19 08:49:51 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:49:54.479392 | orchestrator | 2025-02-19 08:49:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:49:54.479551 | orchestrator | 2025-02-19 08:49:54 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:54.485691 | orchestrator | 2025-02-19 08:49:54 | INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state STARTED 2025-02-19 08:49:54.487832 | orchestrator | 2025-02-19 08:49:54 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:54.497988 | orchestrator | 2025-02-19 08:49:54 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:54.505686 | orchestrator | 2025-02-19 08:49:54 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:54.515742 | orchestrator | 2025-02-19 08:49:54 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state 
STARTED 2025-02-19 08:49:57.652105 | orchestrator | 2025-02-19 08:49:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:49:57.652238 | orchestrator | 2025-02-19 08:49:57 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:49:57.652489 | orchestrator | 2025-02-19 08:49:57 | INFO  | Task ba868eee-a594-4af6-a6f0-f3600e4803cf is in state SUCCESS 2025-02-19 08:49:57.652518 | orchestrator | 2025-02-19 08:49:57.652531 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-02-19 08:49:57.652543 | orchestrator | 2025-02-19 08:49:57.652554 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-02-19 08:49:57.652566 | orchestrator | Wednesday 19 February 2025 08:49:37 +0000 (0:00:00.421) 0:00:00.421 **** 2025-02-19 08:49:57.652577 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:49:57.652590 | orchestrator | changed: [testbed-manager] 2025-02-19 08:49:57.652601 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:49:57.652612 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:49:57.652623 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:49:57.652635 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:49:57.652689 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:49:57.652701 | orchestrator | 2025-02-19 08:49:57.652712 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-02-19 08:49:57.652724 | orchestrator | Wednesday 19 February 2025 08:49:40 +0000 (0:00:03.635) 0:00:04.057 **** 2025-02-19 08:49:57.652735 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-02-19 08:49:57.652765 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-02-19 08:49:57.652788 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-02-19 08:49:57.652799 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-02-19 08:49:57.652810 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-02-19 08:49:57.652821 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-02-19 08:49:57.652833 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-02-19 08:49:57.652843 | orchestrator | 2025-02-19 08:49:57.652855 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-02-19 08:49:57.652866 | orchestrator | Wednesday 19 February 2025 08:49:43 +0000 (0:00:02.386) 0:00:06.443 **** 2025-02-19 08:49:57.652880 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-19 08:49:41.940936', 'end': '2025-02-19 08:49:41.949455', 'delta': '0:00:00.008519', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-19 08:49:57.652917 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-19 08:49:41.914239', 'end': '2025-02-19 08:49:41.922782', 'delta': '0:00:00.008543', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-19 08:49:57.652930 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-19 08:49:42.153254', 'end': '2025-02-19 08:49:42.159635', 'delta': '0:00:00.006381', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-19 08:49:57.652966 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-19 08:49:42.061552', 'end': '2025-02-19 08:49:42.069198', 'delta': '0:00:00.007646', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-02-19 08:49:57.652979 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-19 08:49:42.462666', 'end': '2025-02-19 08:49:42.471543', 'delta': '0:00:00.008877', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-19 08:49:57.652998 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-19 08:49:42.659259', 'end': '2025-02-19 08:49:42.667319', 'delta': '0:00:00.008060', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-19 08:49:57.653014 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-02-19 08:49:42.968469', 'end': '2025-02-19 08:49:42.975024', 'delta': '0:00:00.006555', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-02-19 08:49:57.653025 | orchestrator | 2025-02-19 08:49:57.653037 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-02-19 08:49:57.653048 | orchestrator | Wednesday 19 February 2025 08:49:47 +0000 (0:00:04.199) 0:00:10.642 **** 2025-02-19 08:49:57.653059 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-02-19 08:49:57.653071 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-02-19 08:49:57.653082 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-02-19 08:49:57.653093 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-02-19 08:49:57.653104 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-02-19 08:49:57.653117 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-02-19 08:49:57.653129 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-02-19 08:49:57.653141 | orchestrator | 2025-02-19 08:49:57.653154 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-02-19 08:49:57.653166 | orchestrator | Wednesday 19 February 2025 08:49:50 +0000 (0:00:02.637) 0:00:13.280 **** 2025-02-19 08:49:57.653179 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-02-19 08:49:57.653192 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-02-19 08:49:57.653204 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-02-19 08:49:57.653216 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-02-19 08:49:57.653228 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-02-19 08:49:57.653241 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-02-19 08:49:57.653253 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-02-19 08:49:57.653266 | orchestrator | 2025-02-19 08:49:57.653278 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:49:57.653296 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:49:57.659090 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:49:57.659131 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:49:57.659146 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:49:57.659181 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:49:57.659196 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:49:57.659210 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:49:57.659224 | orchestrator | 2025-02-19 08:49:57.659238 | orchestrator | 2025-02-19 08:49:57.659252 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:49:57.659267 | orchestrator | Wednesday 19 February 2025 08:49:53 +0000 (0:00:03.530) 0:00:16.811 **** 2025-02-19 08:49:57.659281 | orchestrator | =============================================================================== 2025-02-19 08:49:57.659295 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 4.20s 2025-02-19 08:49:57.659309 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.64s 2025-02-19 08:49:57.659323 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 3.53s 2025-02-19 08:49:57.659337 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.64s 2025-02-19 08:49:57.659351 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.39s 2025-02-19 08:49:57.659373 | orchestrator | 2025-02-19 08:49:57 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:49:57.659946 | orchestrator | 2025-02-19 08:49:57 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:49:57.664063 | orchestrator | 2025-02-19 08:49:57 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:49:57.666356 | orchestrator | 2025-02-19 08:49:57 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:49:57.666393 | orchestrator | 2025-02-19 08:49:57 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:50:00.749868 | orchestrator | 2025-02-19 08:49:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:00.750013 | orchestrator | 2025-02-19 08:50:00 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:00.753114 | orchestrator | 2025-02-19 08:50:00 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:00.753808 | orchestrator | 2025-02-19 08:50:00 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:00.757178 | orchestrator | 2025-02-19 08:50:00 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:00.762002 | orchestrator | 2025-02-19 08:50:00 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:00.764689 | orchestrator | 2025-02-19 08:50:00 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:50:03.891602 | orchestrator | 2025-02-19 08:50:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:03.891804 | orchestrator | 2025-02-19 08:50:03 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:03.892170 | orchestrator | 2025-02-19 08:50:03 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:03.892213 | orchestrator | 2025-02-19 08:50:03 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:03.900004 | orchestrator | 2025-02-19 08:50:03 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:03.903914 | orchestrator | 2025-02-19 08:50:03 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:03.905071 | orchestrator | 2025-02-19 08:50:03 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:50:06.991033 | orchestrator | 2025-02-19 08:50:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:06.991195 | orchestrator | 2025-02-19 08:50:06 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:06.991271 | orchestrator | 2025-02-19 08:50:06 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:06.994196 | orchestrator | 2025-02-19 08:50:06 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:06.995461 | orchestrator | 2025-02-19 08:50:06 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:06.998690 | orchestrator | 2025-02-19 08:50:06 | INFO  | Task 
5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:07.001889 | orchestrator | 2025-02-19 08:50:07 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:50:10.072438 | orchestrator | 2025-02-19 08:50:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:10.072539 | orchestrator | 2025-02-19 08:50:10 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:10.082299 | orchestrator | 2025-02-19 08:50:10 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:10.090323 | orchestrator | 2025-02-19 08:50:10 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:10.090444 | orchestrator | 2025-02-19 08:50:10 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:10.090583 | orchestrator | 2025-02-19 08:50:10 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:10.100898 | orchestrator | 2025-02-19 08:50:10 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:50:13.186959 | orchestrator | 2025-02-19 08:50:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:13.187075 | orchestrator | 2025-02-19 08:50:13 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:13.194129 | orchestrator | 2025-02-19 08:50:13 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:13.204080 | orchestrator | 2025-02-19 08:50:13 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:13.204175 | orchestrator | 2025-02-19 08:50:13 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:13.227258 | orchestrator | 2025-02-19 08:50:13 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:16.291730 | orchestrator | 2025-02-19 08:50:13 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state STARTED 2025-02-19 08:50:16.291858 | orchestrator | 2025-02-19 08:50:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:16.291897 | orchestrator | 2025-02-19 08:50:16 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:16.292895 | orchestrator | 2025-02-19 08:50:16 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:16.292936 | orchestrator | 2025-02-19 08:50:16 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:16.293993 | orchestrator | 2025-02-19 08:50:16 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:16.296145 | orchestrator | 2025-02-19 08:50:16 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:16.296208 | orchestrator | 2025-02-19 08:50:16 | INFO  | Task 0c4c5236-aaef-4332-9a38-c58ae5c2b699 is in state SUCCESS 2025-02-19 08:50:19.365190 | orchestrator | 2025-02-19 08:50:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:19.365327 | orchestrator | 2025-02-19 08:50:19 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:22.421163 | orchestrator | 2025-02-19 08:50:19 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:22.421314 | orchestrator | 2025-02-19 08:50:19 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:22.421335 | orchestrator | 2025-02-19 
08:50:19 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:22.421350 | orchestrator | 2025-02-19 08:50:19 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:22.421364 | orchestrator | 2025-02-19 08:50:19 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:22.421378 | orchestrator | 2025-02-19 08:50:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:22.421411 | orchestrator | 2025-02-19 08:50:22 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:25.468902 | orchestrator | 2025-02-19 08:50:22 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:25.469040 | orchestrator | 2025-02-19 08:50:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:25.469064 | orchestrator | 2025-02-19 08:50:22 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:25.469078 | orchestrator | 2025-02-19 08:50:22 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:25.469091 | orchestrator | 2025-02-19 08:50:22 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:25.469105 | orchestrator | 2025-02-19 08:50:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:25.469134 | orchestrator | 2025-02-19 08:50:25 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:25.469479 | orchestrator | 2025-02-19 08:50:25 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:25.470870 | orchestrator | 2025-02-19 08:50:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:25.471290 | orchestrator | 2025-02-19 08:50:25 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:25.473304 | orchestrator | 2025-02-19 08:50:25 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:25.475752 | orchestrator | 2025-02-19 08:50:25 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:28.544593 | orchestrator | 2025-02-19 08:50:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:28.544790 | orchestrator | 2025-02-19 08:50:28 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:28.550100 | orchestrator | 2025-02-19 08:50:28 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state STARTED 2025-02-19 08:50:28.553607 | orchestrator | 2025-02-19 08:50:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:28.556672 | orchestrator | 2025-02-19 08:50:28 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:28.556749 | orchestrator | 2025-02-19 08:50:28 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:31.638703 | orchestrator | 2025-02-19 08:50:28 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:31.638788 | orchestrator | 2025-02-19 08:50:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:31.638807 | orchestrator | 2025-02-19 08:50:31 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:31.638867 | orchestrator | 2025-02-19 08:50:31 | INFO  | Task abf72639-7fb6-4ef3-876d-912d01694a06 is in state SUCCESS 2025-02-19 08:50:31.638878 | 
orchestrator | 2025-02-19 08:50:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:31.641417 | orchestrator | 2025-02-19 08:50:31 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:31.644151 | orchestrator | 2025-02-19 08:50:31 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:34.740141 | orchestrator | 2025-02-19 08:50:31 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:34.740267 | orchestrator | 2025-02-19 08:50:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:34.740305 | orchestrator | 2025-02-19 08:50:34 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:34.740879 | orchestrator | 2025-02-19 08:50:34 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:34.740910 | orchestrator | 2025-02-19 08:50:34 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:34.740934 | orchestrator | 2025-02-19 08:50:34 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:37.788127 | orchestrator | 2025-02-19 08:50:34 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:37.788248 | orchestrator | 2025-02-19 08:50:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:37.788288 | orchestrator | 2025-02-19 08:50:37 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:37.789445 | orchestrator | 2025-02-19 08:50:37 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:37.789529 | orchestrator | 2025-02-19 08:50:37 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:37.792379 | orchestrator | 2025-02-19 08:50:37 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:37.792426 | orchestrator | 2025-02-19 08:50:37 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:40.887942 | orchestrator | 2025-02-19 08:50:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:40.888133 | orchestrator | 2025-02-19 08:50:40 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:40.891031 | orchestrator | 2025-02-19 08:50:40 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:40.892489 | orchestrator | 2025-02-19 08:50:40 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:40.897851 | orchestrator | 2025-02-19 08:50:40 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:40.901573 | orchestrator | 2025-02-19 08:50:40 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:43.964190 | orchestrator | 2025-02-19 08:50:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:43.964339 | orchestrator | 2025-02-19 08:50:43 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:43.966403 | orchestrator | 2025-02-19 08:50:43 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:43.971974 | orchestrator | 2025-02-19 08:50:43 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:43.979460 | orchestrator | 2025-02-19 08:50:43 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 
2025-02-19 08:50:43.981388 | orchestrator | 2025-02-19 08:50:43 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:47.056265 | orchestrator | 2025-02-19 08:50:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:47.056361 | orchestrator | 2025-02-19 08:50:47 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:47.058047 | orchestrator | 2025-02-19 08:50:47 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:47.060030 | orchestrator | 2025-02-19 08:50:47 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:47.066218 | orchestrator | 2025-02-19 08:50:47 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:50.124986 | orchestrator | 2025-02-19 08:50:47 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:50.125209 | orchestrator | 2025-02-19 08:50:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:50.125254 | orchestrator | 2025-02-19 08:50:50 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:50.125348 | orchestrator | 2025-02-19 08:50:50 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:50.125793 | orchestrator | 2025-02-19 08:50:50 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:50.127917 | orchestrator | 2025-02-19 08:50:50 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:53.217713 | orchestrator | 2025-02-19 08:50:50 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:53.217830 | orchestrator | 2025-02-19 08:50:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:53.217862 | orchestrator | 2025-02-19 08:50:53 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:53.221281 | orchestrator | 2025-02-19 08:50:53 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:53.221317 | orchestrator | 2025-02-19 08:50:53 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:53.221338 | orchestrator | 2025-02-19 08:50:53 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:53.223494 | orchestrator | 2025-02-19 08:50:53 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:53.223917 | orchestrator | 2025-02-19 08:50:53 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:56.302007 | orchestrator | 2025-02-19 08:50:56 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 2025-02-19 08:50:56.307065 | orchestrator | 2025-02-19 08:50:56 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:56.309170 | orchestrator | 2025-02-19 08:50:56 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:56.309682 | orchestrator | 2025-02-19 08:50:56 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:56.313823 | orchestrator | 2025-02-19 08:50:56 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:50:56.313900 | orchestrator | 2025-02-19 08:50:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:50:59.377153 | orchestrator | 2025-02-19 08:50:59 | INFO  | Task fe2fe279-13e0-446d-a186-85f1e42d3020 is in state STARTED 
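[Editor's note] The interleaved "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" entries above and below come from a manager-side wait loop that polls the state of each queued deployment task once per second until it reports SUCCESS. A minimal sketch of that pattern follows; it is an illustration only, and get_task_state() is a hypothetical stand-in for whatever result backend the real tooling queries.

import time

POLL_INTERVAL = 1  # seconds, matching "Wait 1 second(s) until the next check"


def get_task_state(task_id: str) -> str:
    """Hypothetical helper: look up the current state of a queued task
    (e.g. PENDING, STARTED, SUCCESS, FAILURE) in the result backend."""
    raise NotImplementedError


def wait_for_tasks(task_ids: list[str]) -> None:
    """Poll all tasks until every one has left the PENDING/STARTED states."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {POLL_INTERVAL} second(s) until the next check")
            time.sleep(POLL_INTERVAL)
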
2025-02-19 08:50:59.378579 | orchestrator | 2025-02-19 08:50:59 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:50:59.379822 | orchestrator | 2025-02-19 08:50:59 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:50:59.379891 | orchestrator | 2025-02-19 08:50:59 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:50:59.389112 | orchestrator | 2025-02-19 08:50:59 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:51:02.472023 | orchestrator | 2025-02-19 08:50:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:02.472149 | orchestrator | 2025-02-19 08:51:02.472169 | orchestrator | 2025-02-19 08:51:02.472185 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-02-19 08:51:02.472201 | orchestrator | 2025-02-19 08:51:02.472216 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-02-19 08:51:02.472232 | orchestrator | Wednesday 19 February 2025 08:49:37 +0000 (0:00:00.299) 0:00:00.299 **** 2025-02-19 08:51:02.472247 | orchestrator | ok: [testbed-manager] => { 2025-02-19 08:51:02.472264 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-02-19 08:51:02.472280 | orchestrator | } 2025-02-19 08:51:02.472294 | orchestrator | 2025-02-19 08:51:02.472310 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-02-19 08:51:02.472324 | orchestrator | Wednesday 19 February 2025 08:49:37 +0000 (0:00:00.188) 0:00:00.488 **** 2025-02-19 08:51:02.472340 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.472356 | orchestrator | 2025-02-19 08:51:02.472371 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-02-19 08:51:02.472386 | orchestrator | Wednesday 19 February 2025 08:49:39 +0000 (0:00:01.561) 0:00:02.049 **** 2025-02-19 08:51:02.472401 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-02-19 08:51:02.472415 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-02-19 08:51:02.472431 | orchestrator | 2025-02-19 08:51:02.472447 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-02-19 08:51:02.472461 | orchestrator | Wednesday 19 February 2025 08:49:40 +0000 (0:00:01.119) 0:00:03.169 **** 2025-02-19 08:51:02.472476 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.472491 | orchestrator | 2025-02-19 08:51:02.472506 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-02-19 08:51:02.472521 | orchestrator | Wednesday 19 February 2025 08:49:44 +0000 (0:00:03.695) 0:00:06.865 **** 2025-02-19 08:51:02.472536 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.472550 | orchestrator | 2025-02-19 08:51:02.472566 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-02-19 08:51:02.472581 | orchestrator | Wednesday 19 February 2025 08:49:46 +0000 (0:00:02.016) 0:00:08.882 **** 2025-02-19 08:51:02.472597 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
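[Editor's note] The "FAILED - RETRYING: ... Manage homer service (10 retries left)" entry above (and the similar retries for the openstackclient and phpmyadmin services further down) shows a task being re-run until the freshly started compose service is actually usable. Purely as an illustration of what such a wait amounts to, and not the role's actual implementation, the snippet below polls a named container's healthcheck via the Docker SDK for Python; the container name and timeout are assumptions.

import time

import docker  # Docker SDK for Python


def wait_for_healthy(container_name: str, timeout: int = 300) -> bool:
    """Poll a container until its healthcheck reports 'healthy' or the timeout expires.

    Assumes the container already exists and defines a HEALTHCHECK;
    otherwise containers.get() raises docker.errors.NotFound or the
    Health key is absent.
    """
    client = docker.from_env()
    container = client.containers.get(container_name)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        container.reload()  # refresh cached state from the daemon
        health = container.attrs.get("State", {}).get("Health", {}).get("Status")
        if health == "healthy":
            return True
        time.sleep(5)
    return False


# Example (container name is an assumption):
# wait_for_healthy("homer")
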
2025-02-19 08:51:02.472612 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.472629 | orchestrator | 2025-02-19 08:51:02.472677 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-02-19 08:51:02.472694 | orchestrator | Wednesday 19 February 2025 08:50:11 +0000 (0:00:25.642) 0:00:34.524 **** 2025-02-19 08:51:02.472711 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.472726 | orchestrator | 2025-02-19 08:51:02.472742 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:51:02.472757 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.472794 | orchestrator | 2025-02-19 08:51:02.472810 | orchestrator | 2025-02-19 08:51:02.472825 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:51:02.472838 | orchestrator | Wednesday 19 February 2025 08:50:15 +0000 (0:00:03.655) 0:00:38.180 **** 2025-02-19 08:51:02.472852 | orchestrator | =============================================================================== 2025-02-19 08:51:02.472866 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.64s 2025-02-19 08:51:02.472879 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.70s 2025-02-19 08:51:02.472894 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.66s 2025-02-19 08:51:02.472909 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.02s 2025-02-19 08:51:02.472926 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.56s 2025-02-19 08:51:02.472941 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.12s 2025-02-19 08:51:02.472957 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.19s 2025-02-19 08:51:02.472971 | orchestrator | 2025-02-19 08:51:02.472986 | orchestrator | 2025-02-19 08:51:02.473001 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-02-19 08:51:02.473015 | orchestrator | 2025-02-19 08:51:02.473030 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-02-19 08:51:02.473044 | orchestrator | Wednesday 19 February 2025 08:49:37 +0000 (0:00:00.418) 0:00:00.418 **** 2025-02-19 08:51:02.473059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-02-19 08:51:02.473075 | orchestrator | 2025-02-19 08:51:02.473090 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-02-19 08:51:02.473104 | orchestrator | Wednesday 19 February 2025 08:49:38 +0000 (0:00:00.393) 0:00:00.811 **** 2025-02-19 08:51:02.473119 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-02-19 08:51:02.473133 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-02-19 08:51:02.473148 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-02-19 08:51:02.473163 | orchestrator | 2025-02-19 08:51:02.473177 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-02-19 
08:51:02.473192 | orchestrator | Wednesday 19 February 2025 08:49:39 +0000 (0:00:01.559) 0:00:02.371 **** 2025-02-19 08:51:02.473206 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.473222 | orchestrator | 2025-02-19 08:51:02.473237 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-02-19 08:51:02.473252 | orchestrator | Wednesday 19 February 2025 08:49:41 +0000 (0:00:01.797) 0:00:04.169 **** 2025-02-19 08:51:02.473279 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-02-19 08:51:02.473295 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.473310 | orchestrator | 2025-02-19 08:51:02.473325 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-02-19 08:51:02.473340 | orchestrator | Wednesday 19 February 2025 08:50:18 +0000 (0:00:36.613) 0:00:40.782 **** 2025-02-19 08:51:02.473356 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.473371 | orchestrator | 2025-02-19 08:51:02.473386 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-02-19 08:51:02.473452 | orchestrator | Wednesday 19 February 2025 08:50:19 +0000 (0:00:01.248) 0:00:42.031 **** 2025-02-19 08:51:02.473470 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.473487 | orchestrator | 2025-02-19 08:51:02.473503 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-02-19 08:51:02.473520 | orchestrator | Wednesday 19 February 2025 08:50:20 +0000 (0:00:01.098) 0:00:43.130 **** 2025-02-19 08:51:02.473548 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.473564 | orchestrator | 2025-02-19 08:51:02.473580 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-02-19 08:51:02.473597 | orchestrator | Wednesday 19 February 2025 08:50:23 +0000 (0:00:02.877) 0:00:46.008 **** 2025-02-19 08:51:02.473613 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.473629 | orchestrator | 2025-02-19 08:51:02.473663 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-02-19 08:51:02.473685 | orchestrator | Wednesday 19 February 2025 08:50:24 +0000 (0:00:01.180) 0:00:47.189 **** 2025-02-19 08:51:02.473699 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.473714 | orchestrator | 2025-02-19 08:51:02.473729 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-02-19 08:51:02.473744 | orchestrator | Wednesday 19 February 2025 08:50:25 +0000 (0:00:01.085) 0:00:48.275 **** 2025-02-19 08:51:02.473760 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.473775 | orchestrator | 2025-02-19 08:51:02.473790 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:51:02.473805 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.473821 | orchestrator | 2025-02-19 08:51:02.473834 | orchestrator | 2025-02-19 08:51:02.473848 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:51:02.473862 | orchestrator | Wednesday 19 February 2025 08:50:26 +0000 (0:00:00.601) 0:00:48.876 **** 2025-02-19 08:51:02.473875 | orchestrator | 
=============================================================================== 2025-02-19 08:51:02.473890 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.61s 2025-02-19 08:51:02.473905 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.88s 2025-02-19 08:51:02.473920 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.80s 2025-02-19 08:51:02.473934 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.56s 2025-02-19 08:51:02.473948 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.25s 2025-02-19 08:51:02.473962 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.18s 2025-02-19 08:51:02.473976 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.10s 2025-02-19 08:51:02.473990 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.09s 2025-02-19 08:51:02.474004 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.60s 2025-02-19 08:51:02.474097 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.39s 2025-02-19 08:51:02.474114 | orchestrator | 2025-02-19 08:51:02.474129 | orchestrator | 2025-02-19 08:51:02.474144 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 08:51:02.474159 | orchestrator | 2025-02-19 08:51:02.474174 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 08:51:02.474189 | orchestrator | Wednesday 19 February 2025 08:49:37 +0000 (0:00:00.413) 0:00:00.413 **** 2025-02-19 08:51:02.474204 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-02-19 08:51:02.474219 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-02-19 08:51:02.474234 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-02-19 08:51:02.474249 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-02-19 08:51:02.474264 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-02-19 08:51:02.474280 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-02-19 08:51:02.474295 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-02-19 08:51:02.474310 | orchestrator | 2025-02-19 08:51:02.474324 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-02-19 08:51:02.474340 | orchestrator | 2025-02-19 08:51:02.474365 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-02-19 08:51:02.474380 | orchestrator | Wednesday 19 February 2025 08:49:38 +0000 (0:00:01.709) 0:00:02.122 **** 2025-02-19 08:51:02.474407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:51:02.474425 | orchestrator | 2025-02-19 08:51:02.474440 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-02-19 08:51:02.474455 | orchestrator | Wednesday 19 February 2025 08:49:40 +0000 (0:00:02.174) 0:00:04.297 **** 2025-02-19 
08:51:02.474470 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:51:02.474484 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:51:02.474498 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.474511 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:51:02.474526 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:51:02.474553 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:51:02.474568 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:51:02.474583 | orchestrator | 2025-02-19 08:51:02.474596 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-02-19 08:51:02.474610 | orchestrator | Wednesday 19 February 2025 08:49:44 +0000 (0:00:03.173) 0:00:07.470 **** 2025-02-19 08:51:02.474623 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.474687 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:51:02.474704 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:51:02.474719 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:51:02.474735 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:51:02.474750 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:51:02.474765 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:51:02.474787 | orchestrator | 2025-02-19 08:51:02.474803 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-02-19 08:51:02.474819 | orchestrator | Wednesday 19 February 2025 08:49:49 +0000 (0:00:05.352) 0:00:12.823 **** 2025-02-19 08:51:02.474834 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.474849 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:51:02.474864 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:51:02.474879 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:51:02.474895 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:51:02.474909 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:51:02.474924 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:51:02.474939 | orchestrator | 2025-02-19 08:51:02.474954 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-02-19 08:51:02.474969 | orchestrator | Wednesday 19 February 2025 08:49:52 +0000 (0:00:02.908) 0:00:15.732 **** 2025-02-19 08:51:02.474984 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:51:02.474999 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:51:02.475015 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:51:02.475030 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:51:02.475044 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:51:02.475057 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.475071 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:51:02.475085 | orchestrator | 2025-02-19 08:51:02.475109 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-02-19 08:51:02.475125 | orchestrator | Wednesday 19 February 2025 08:50:01 +0000 (0:00:08.981) 0:00:24.714 **** 2025-02-19 08:51:02.475141 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:51:02.475156 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:51:02.475171 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:51:02.475186 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:51:02.475202 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:51:02.475217 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:51:02.475232 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.475247 | 
orchestrator | 2025-02-19 08:51:02.475262 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-02-19 08:51:02.475285 | orchestrator | Wednesday 19 February 2025 08:50:21 +0000 (0:00:19.968) 0:00:44.682 **** 2025-02-19 08:51:02.475302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:51:02.475321 | orchestrator | 2025-02-19 08:51:02.475337 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-02-19 08:51:02.475352 | orchestrator | Wednesday 19 February 2025 08:50:24 +0000 (0:00:03.537) 0:00:48.220 **** 2025-02-19 08:51:02.475368 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-02-19 08:51:02.475383 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-02-19 08:51:02.475398 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-02-19 08:51:02.475413 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-02-19 08:51:02.475429 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-02-19 08:51:02.475444 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-02-19 08:51:02.475460 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-02-19 08:51:02.475475 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-02-19 08:51:02.475490 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-02-19 08:51:02.475505 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-02-19 08:51:02.475520 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-02-19 08:51:02.475535 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-02-19 08:51:02.475549 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-02-19 08:51:02.475564 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-02-19 08:51:02.475580 | orchestrator | 2025-02-19 08:51:02.475595 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-02-19 08:51:02.475611 | orchestrator | Wednesday 19 February 2025 08:50:35 +0000 (0:00:11.124) 0:00:59.344 **** 2025-02-19 08:51:02.475626 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.475658 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:51:02.475674 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:51:02.475689 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:51:02.475705 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:51:02.475718 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:51:02.475732 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:51:02.475747 | orchestrator | 2025-02-19 08:51:02.475762 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-02-19 08:51:02.475777 | orchestrator | Wednesday 19 February 2025 08:50:38 +0000 (0:00:02.471) 0:01:01.816 **** 2025-02-19 08:51:02.475791 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:51:02.475805 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.475818 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:51:02.475831 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:51:02.475845 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:51:02.475861 | orchestrator | 
changed: [testbed-node-4] 2025-02-19 08:51:02.475875 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:51:02.475889 | orchestrator | 2025-02-19 08:51:02.475904 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-02-19 08:51:02.475930 | orchestrator | Wednesday 19 February 2025 08:50:42 +0000 (0:00:04.002) 0:01:05.819 **** 2025-02-19 08:51:02.475944 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:51:02.475958 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:51:02.475972 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.475987 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:51:02.476001 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:51:02.476015 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:51:02.476029 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:51:02.476043 | orchestrator | 2025-02-19 08:51:02.476058 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-02-19 08:51:02.476080 | orchestrator | Wednesday 19 February 2025 08:50:45 +0000 (0:00:03.274) 0:01:09.093 **** 2025-02-19 08:51:02.476095 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:51:02.476109 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:51:02.476123 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:51:02.476137 | orchestrator | ok: [testbed-manager] 2025-02-19 08:51:02.476150 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:51:02.476164 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:51:02.476178 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:51:02.476191 | orchestrator | 2025-02-19 08:51:02.476204 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-02-19 08:51:02.476218 | orchestrator | Wednesday 19 February 2025 08:50:49 +0000 (0:00:03.956) 0:01:13.055 **** 2025-02-19 08:51:02.476233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-02-19 08:51:02.476249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:51:02.476264 | orchestrator | 2025-02-19 08:51:02.476278 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-02-19 08:51:02.476292 | orchestrator | Wednesday 19 February 2025 08:50:53 +0000 (0:00:03.530) 0:01:16.586 **** 2025-02-19 08:51:02.476307 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.476321 | orchestrator | 2025-02-19 08:51:02.476336 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-02-19 08:51:02.476350 | orchestrator | Wednesday 19 February 2025 08:50:57 +0000 (0:00:04.190) 0:01:20.776 **** 2025-02-19 08:51:02.476364 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:51:02.476379 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:51:02.476402 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:51:02.476418 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:51:02.476432 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:51:02.476447 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:51:02.476461 | orchestrator | changed: [testbed-manager] 2025-02-19 08:51:02.476475 | orchestrator | 2025-02-19 08:51:02.476489 | orchestrator | PLAY RECAP 
********************************************************************* 2025-02-19 08:51:02.476504 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.476519 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.476534 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.476553 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.476567 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.476582 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.476596 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:51:02.476610 | orchestrator | 2025-02-19 08:51:02.476625 | orchestrator | 2025-02-19 08:51:02.476691 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:51:02.476708 | orchestrator | Wednesday 19 February 2025 08:51:00 +0000 (0:00:03.419) 0:01:24.196 **** 2025-02-19 08:51:02.476722 | orchestrator | =============================================================================== 2025-02-19 08:51:02.476737 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.97s 2025-02-19 08:51:02.476759 | orchestrator | osism.services.netdata : Copy configuration files ---------------------- 11.12s 2025-02-19 08:51:02.476773 | orchestrator | osism.services.netdata : Add repository --------------------------------- 8.98s 2025-02-19 08:51:02.476788 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.35s 2025-02-19 08:51:02.476802 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 4.19s 2025-02-19 08:51:02.476816 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 4.00s 2025-02-19 08:51:02.476831 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.96s 2025-02-19 08:51:02.476845 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 3.54s 2025-02-19 08:51:02.476860 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 3.53s 2025-02-19 08:51:02.476878 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.42s 2025-02-19 08:51:02.476893 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.27s 2025-02-19 08:51:02.476919 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.17s 2025-02-19 08:51:02.477222 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.91s 2025-02-19 08:51:02.477325 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.47s 2025-02-19 08:51:02.477353 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.17s 2025-02-19 08:51:02.477374 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.71s 2025-02-19 08:51:02.477389 | orchestrator | 2025-02-19 08:51:02 | INFO  | Task 
fe2fe279-13e0-446d-a186-85f1e42d3020 is in state SUCCESS 2025-02-19 08:51:02.477401 | orchestrator | 2025-02-19 08:51:02 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:02.477413 | orchestrator | 2025-02-19 08:51:02 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:02.477450 | orchestrator | 2025-02-19 08:51:02 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:02.483906 | orchestrator | 2025-02-19 08:51:02 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:51:02.484052 | orchestrator | 2025-02-19 08:51:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:05.552794 | orchestrator | 2025-02-19 08:51:05 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:05.559049 | orchestrator | 2025-02-19 08:51:05 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:05.562259 | orchestrator | 2025-02-19 08:51:05 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:05.573775 | orchestrator | 2025-02-19 08:51:05 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state STARTED 2025-02-19 08:51:08.624550 | orchestrator | 2025-02-19 08:51:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:08.624767 | orchestrator | 2025-02-19 08:51:08 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:08.627853 | orchestrator | 2025-02-19 08:51:08 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:08.629834 | orchestrator | 2025-02-19 08:51:08 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:11.720458 | orchestrator | 2025-02-19 08:51:08 | INFO  | Task 5bf7e51f-92d5-43e8-8ca0-4dcd183114e3 is in state SUCCESS 2025-02-19 08:51:11.720569 | orchestrator | 2025-02-19 08:51:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:11.720597 | orchestrator | 2025-02-19 08:51:11 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:14.758611 | orchestrator | 2025-02-19 08:51:11 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:14.758797 | orchestrator | 2025-02-19 08:51:11 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:14.758819 | orchestrator | 2025-02-19 08:51:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:14.758854 | orchestrator | 2025-02-19 08:51:14 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:14.761773 | orchestrator | 2025-02-19 08:51:14 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:14.761883 | orchestrator | 2025-02-19 08:51:14 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:14.762273 | orchestrator | 2025-02-19 08:51:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:17.805536 | orchestrator | 2025-02-19 08:51:17 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:20.889928 | orchestrator | 2025-02-19 08:51:17 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:20.890134 | orchestrator | 2025-02-19 08:51:17 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:20.890159 | orchestrator | 2025-02-19 08:51:17 | INFO  | Wait 1 
second(s) until the next check 2025-02-19 08:51:20.890192 | orchestrator | 2025-02-19 08:51:20 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:20.896024 | orchestrator | 2025-02-19 08:51:20 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:20.898663 | orchestrator | 2025-02-19 08:51:20 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:23.975460 | orchestrator | 2025-02-19 08:51:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:23.975607 | orchestrator | 2025-02-19 08:51:23 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:27.044687 | orchestrator | 2025-02-19 08:51:23 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:27.044814 | orchestrator | 2025-02-19 08:51:23 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:27.044836 | orchestrator | 2025-02-19 08:51:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:27.044870 | orchestrator | 2025-02-19 08:51:27 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:27.049025 | orchestrator | 2025-02-19 08:51:27 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:27.049704 | orchestrator | 2025-02-19 08:51:27 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:30.103366 | orchestrator | 2025-02-19 08:51:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:30.103519 | orchestrator | 2025-02-19 08:51:30 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:33.153976 | orchestrator | 2025-02-19 08:51:30 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:33.154091 | orchestrator | 2025-02-19 08:51:30 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:33.154101 | orchestrator | 2025-02-19 08:51:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:33.154120 | orchestrator | 2025-02-19 08:51:33 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:33.154183 | orchestrator | 2025-02-19 08:51:33 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:33.156166 | orchestrator | 2025-02-19 08:51:33 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:33.156677 | orchestrator | 2025-02-19 08:51:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:36.245351 | orchestrator | 2025-02-19 08:51:36 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:36.249420 | orchestrator | 2025-02-19 08:51:36 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:39.312938 | orchestrator | 2025-02-19 08:51:36 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:39.313050 | orchestrator | 2025-02-19 08:51:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:39.313086 | orchestrator | 2025-02-19 08:51:39 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:39.313165 | orchestrator | 2025-02-19 08:51:39 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:39.315078 | orchestrator | 2025-02-19 08:51:39 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state 
STARTED 2025-02-19 08:51:42.359118 | orchestrator | 2025-02-19 08:51:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:42.359278 | orchestrator | 2025-02-19 08:51:42 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:42.361260 | orchestrator | 2025-02-19 08:51:42 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:42.363779 | orchestrator | 2025-02-19 08:51:42 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:45.412860 | orchestrator | 2025-02-19 08:51:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:45.413025 | orchestrator | 2025-02-19 08:51:45 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:45.414532 | orchestrator | 2025-02-19 08:51:45 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:48.466241 | orchestrator | 2025-02-19 08:51:45 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:48.466375 | orchestrator | 2025-02-19 08:51:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:48.466412 | orchestrator | 2025-02-19 08:51:48 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:48.466781 | orchestrator | 2025-02-19 08:51:48 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:48.467815 | orchestrator | 2025-02-19 08:51:48 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:48.467997 | orchestrator | 2025-02-19 08:51:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:51.528540 | orchestrator | 2025-02-19 08:51:51 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:54.578589 | orchestrator | 2025-02-19 08:51:51 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:54.578804 | orchestrator | 2025-02-19 08:51:51 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:54.578830 | orchestrator | 2025-02-19 08:51:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:54.578865 | orchestrator | 2025-02-19 08:51:54 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:54.579427 | orchestrator | 2025-02-19 08:51:54 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:54.579489 | orchestrator | 2025-02-19 08:51:54 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:51:57.625219 | orchestrator | 2025-02-19 08:51:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:51:57.625359 | orchestrator | 2025-02-19 08:51:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:51:57.626742 | orchestrator | 2025-02-19 08:51:57 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:51:57.629040 | orchestrator | 2025-02-19 08:51:57 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:00.686122 | orchestrator | 2025-02-19 08:51:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:00.686270 | orchestrator | 2025-02-19 08:52:00 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:00.686450 | orchestrator | 2025-02-19 08:52:00 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state STARTED 2025-02-19 08:52:00.688468 | orchestrator 
| 2025-02-19 08:52:00 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:03.735366 | orchestrator | 2025-02-19 08:52:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:03.735523 | orchestrator | 2025-02-19 08:52:03 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:03.738400 | orchestrator | 2025-02-19 08:52:03 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:03.742919 | orchestrator | 2025-02-19 08:52:03 | INFO  | Task 996d35b1-2e15-474a-9bdc-81d3d8813cf5 is in state SUCCESS 2025-02-19 08:52:03.746763 | orchestrator | 2025-02-19 08:52:03.746855 | orchestrator | 2025-02-19 08:52:03.746875 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-02-19 08:52:03.746899 | orchestrator | 2025-02-19 08:52:03.746914 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-02-19 08:52:03.746929 | orchestrator | Wednesday 19 February 2025 08:50:03 +0000 (0:00:01.188) 0:00:01.188 **** 2025-02-19 08:52:03.746943 | orchestrator | ok: [testbed-manager] 2025-02-19 08:52:03.746958 | orchestrator | 2025-02-19 08:52:03.746972 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-02-19 08:52:03.746987 | orchestrator | Wednesday 19 February 2025 08:50:05 +0000 (0:00:01.316) 0:00:02.505 **** 2025-02-19 08:52:03.747002 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-02-19 08:52:03.747016 | orchestrator | 2025-02-19 08:52:03.747030 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-02-19 08:52:03.747044 | orchestrator | Wednesday 19 February 2025 08:50:05 +0000 (0:00:00.708) 0:00:03.213 **** 2025-02-19 08:52:03.747058 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.747073 | orchestrator | 2025-02-19 08:52:03.747087 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-02-19 08:52:03.747101 | orchestrator | Wednesday 19 February 2025 08:50:07 +0000 (0:00:01.915) 0:00:05.128 **** 2025-02-19 08:52:03.747114 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
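[Editor's note] Both the homer and phpmyadmin plays start by ensuring the external "traefik" Docker network exists before their docker-compose.yml files are copied and applied. The roles do this through Ansible; purely as an illustration of the same idempotent operation, a Docker-SDK equivalent could look like the sketch below. The network name "traefik" matches the task names in the log; the driver choice is an assumption.

import docker  # Docker SDK for Python


def ensure_external_network(name: str = "traefik", driver: str = "bridge") -> None:
    """Create the named Docker network if it does not exist yet (idempotent)."""
    client = docker.from_env()
    # networks.list(names=[...]) filters by substring, so compare exactly.
    existing = [n for n in client.networks.list(names=[name]) if n.name == name]
    if existing:
        print(f"network {name} already exists")  # corresponds to the 'ok:' result in the log
        return
    client.networks.create(name=name, driver=driver)
    print(f"network {name} created")


# ensure_external_network()
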
2025-02-19 08:52:03.747129 | orchestrator | ok: [testbed-manager] 2025-02-19 08:52:03.747143 | orchestrator | 2025-02-19 08:52:03.747157 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-02-19 08:52:03.747171 | orchestrator | Wednesday 19 February 2025 08:51:01 +0000 (0:00:53.270) 0:00:58.399 **** 2025-02-19 08:52:03.747185 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.747199 | orchestrator | 2025-02-19 08:52:03.747213 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:52:03.747227 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:52:03.747264 | orchestrator | 2025-02-19 08:52:03.747279 | orchestrator | 2025-02-19 08:52:03.747292 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:52:03.747307 | orchestrator | Wednesday 19 February 2025 08:51:05 +0000 (0:00:04.463) 0:01:02.862 **** 2025-02-19 08:52:03.747321 | orchestrator | =============================================================================== 2025-02-19 08:52:03.747335 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.27s 2025-02-19 08:52:03.747349 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.46s 2025-02-19 08:52:03.747362 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.92s 2025-02-19 08:52:03.747376 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.32s 2025-02-19 08:52:03.747390 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.71s 2025-02-19 08:52:03.747404 | orchestrator | 2025-02-19 08:52:03.747418 | orchestrator | 2025-02-19 08:52:03.747432 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-02-19 08:52:03.747446 | orchestrator | 2025-02-19 08:52:03.747460 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-02-19 08:52:03.747473 | orchestrator | Wednesday 19 February 2025 08:49:32 +0000 (0:00:00.380) 0:00:00.380 **** 2025-02-19 08:52:03.747487 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:52:03.747502 | orchestrator | 2025-02-19 08:52:03.747516 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-02-19 08:52:03.747530 | orchestrator | Wednesday 19 February 2025 08:49:33 +0000 (0:00:01.822) 0:00:02.203 **** 2025-02-19 08:52:03.747548 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-19 08:52:03.747562 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-19 08:52:03.747576 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-19 08:52:03.747590 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-19 08:52:03.747603 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-19 08:52:03.747617 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-19 08:52:03.747664 | orchestrator | changed: 
[testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-19 08:52:03.747680 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-19 08:52:03.747695 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-19 08:52:03.747709 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-19 08:52:03.747723 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-19 08:52:03.747738 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-19 08:52:03.747752 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-19 08:52:03.747766 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-19 08:52:03.747780 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-19 08:52:03.747794 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-19 08:52:03.747820 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-19 08:52:03.747835 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-19 08:52:03.747849 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-19 08:52:03.747871 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-19 08:52:03.747885 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-19 08:52:03.747899 | orchestrator | 2025-02-19 08:52:03.747912 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-02-19 08:52:03.747926 | orchestrator | Wednesday 19 February 2025 08:49:38 +0000 (0:00:04.197) 0:00:06.400 **** 2025-02-19 08:52:03.747941 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:52:03.747961 | orchestrator | 2025-02-19 08:52:03.747975 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-02-19 08:52:03.747989 | orchestrator | Wednesday 19 February 2025 08:49:39 +0000 (0:00:01.769) 0:00:08.169 **** 2025-02-19 08:52:03.748006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.748024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.748039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.748054 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.748068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.748082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.748110 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.748126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-02-19 08:52:03.748141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748205 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748341 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.748362 | orchestrator 
| 2025-02-19 08:52:03.748376 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-02-19 08:52:03.748390 | orchestrator | Wednesday 19 February 2025 08:49:44 +0000 (0:00:04.908) 0:00:13.078 **** 2025-02-19 08:52:03.748417 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.748433 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748447 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748462 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:52:03.748476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.748491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.748520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.748594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748624 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:52:03.748707 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.748733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748789 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:52:03.748812 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:52:03.748835 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:52:03.748858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.748893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.748943 | orchestrator | skipping: [testbed-node-4] 2025-02-19 
08:52:03.748967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.748990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749029 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:52:03.749043 | orchestrator | 2025-02-19 08:52:03.749057 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-02-19 08:52:03.749071 | orchestrator | Wednesday 19 February 2025 08:49:47 +0000 (0:00:02.793) 0:00:15.871 **** 2025-02-19 08:52:03.749086 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.749100 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749131 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.749485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749514 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:52:03.749528 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:52:03.749542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.749568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-02-19 08:52:03.749598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.749622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.749703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749739 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:52:03.749754 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:52:03.749768 | orchestrator | skipping: [testbed-node-3] 2025-02-19 
08:52:03.749790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.749804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749852 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:52:03.749866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-19 08:52:03.749881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.749910 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:52:03.749931 | orchestrator | 2025-02-19 08:52:03.749945 | orchestrator | TASK [common : Copying 
over /run subdirectories conf] ************************** 2025-02-19 08:52:03.749960 | orchestrator | Wednesday 19 February 2025 08:49:51 +0000 (0:00:03.395) 0:00:19.267 **** 2025-02-19 08:52:03.749974 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:52:03.749988 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:52:03.750001 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:52:03.750015 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:52:03.750085 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:52:03.750109 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:52:03.750134 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:52:03.750159 | orchestrator | 2025-02-19 08:52:03.750183 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-02-19 08:52:03.750208 | orchestrator | Wednesday 19 February 2025 08:49:52 +0000 (0:00:01.145) 0:00:20.412 **** 2025-02-19 08:52:03.750237 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:52:03.750260 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:52:03.750276 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:52:03.750289 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:52:03.750303 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:52:03.750317 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:52:03.750331 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:52:03.750344 | orchestrator | 2025-02-19 08:52:03.750358 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-02-19 08:52:03.750372 | orchestrator | Wednesday 19 February 2025 08:49:53 +0000 (0:00:00.963) 0:00:21.376 **** 2025-02-19 08:52:03.750386 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:52:03.750400 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:03.750414 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:03.750428 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:52:03.750441 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:52:03.750455 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:52:03.750468 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.750482 | orchestrator | 2025-02-19 08:52:03.750496 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-02-19 08:52:03.750510 | orchestrator | Wednesday 19 February 2025 08:50:21 +0000 (0:00:27.967) 0:00:49.344 **** 2025-02-19 08:52:03.750524 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:52:03.750538 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:52:03.750551 | orchestrator | ok: [testbed-manager] 2025-02-19 08:52:03.750565 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:52:03.750579 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:52:03.750593 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:52:03.750615 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:52:03.750657 | orchestrator | 2025-02-19 08:52:03.750674 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-02-19 08:52:03.750689 | orchestrator | Wednesday 19 February 2025 08:50:24 +0000 (0:00:03.779) 0:00:53.124 **** 2025-02-19 08:52:03.750703 | orchestrator | ok: [testbed-manager] 2025-02-19 08:52:03.750716 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:52:03.750730 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:52:03.750744 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:52:03.750757 | 
orchestrator | ok: [testbed-node-3] 2025-02-19 08:52:03.750771 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:52:03.750785 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:52:03.750799 | orchestrator | 2025-02-19 08:52:03.750813 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-02-19 08:52:03.750827 | orchestrator | Wednesday 19 February 2025 08:50:26 +0000 (0:00:01.742) 0:00:54.866 **** 2025-02-19 08:52:03.750841 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:52:03.750855 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:52:03.750869 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:52:03.750883 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:52:03.750904 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:52:03.750928 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:52:03.750941 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:52:03.750955 | orchestrator | 2025-02-19 08:52:03.750969 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-02-19 08:52:03.750983 | orchestrator | Wednesday 19 February 2025 08:50:27 +0000 (0:00:01.312) 0:00:56.179 **** 2025-02-19 08:52:03.750996 | orchestrator | skipping: [testbed-manager] 2025-02-19 08:52:03.751010 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:52:03.751024 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:52:03.751037 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:52:03.751051 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:52:03.751064 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:52:03.751078 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:52:03.751092 | orchestrator | 2025-02-19 08:52:03.751106 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-02-19 08:52:03.751120 | orchestrator | Wednesday 19 February 2025 08:50:29 +0000 (0:00:01.454) 0:00:57.633 **** 2025-02-19 08:52:03.751134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.751149 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.751164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.751183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751199 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.751243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.751258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751278 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.751392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.751511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.751689 | orchestrator | 2025-02-19 08:52:03.751708 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-02-19 08:52:03.751722 | orchestrator | Wednesday 19 February 2025 08:50:37 +0000 (0:00:08.073) 0:01:05.707 **** 2025-02-19 08:52:03.751737 | orchestrator | [WARNING]: Skipped 2025-02-19 08:52:03.751751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-02-19 08:52:03.751764 | orchestrator | to this access issue: 2025-02-19 08:52:03.751778 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-02-19 08:52:03.751792 | orchestrator | directory 2025-02-19 08:52:03.751806 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 08:52:03.751819 | orchestrator | 2025-02-19 08:52:03.751833 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-02-19 08:52:03.751847 | orchestrator | Wednesday 19 February 2025 08:50:38 +0000 (0:00:01.216) 0:01:06.923 **** 2025-02-19 08:52:03.751861 | orchestrator | [WARNING]: Skipped 2025-02-19 08:52:03.751882 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-02-19 08:52:03.751896 | orchestrator | to this access issue: 2025-02-19 08:52:03.751910 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-02-19 08:52:03.751923 | orchestrator | directory 2025-02-19 08:52:03.751937 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 08:52:03.751951 | orchestrator | 2025-02-19 08:52:03.751965 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-02-19 08:52:03.751979 | orchestrator | Wednesday 19 February 2025 08:50:40 +0000 (0:00:01.379) 0:01:08.302 **** 2025-02-19 08:52:03.751992 | orchestrator | [WARNING]: Skipped 2025-02-19 08:52:03.752006 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-02-19 08:52:03.752020 | orchestrator | to this access issue: 2025-02-19 08:52:03.752033 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-02-19 08:52:03.752047 | orchestrator | directory 2025-02-19 08:52:03.752061 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 08:52:03.752075 | orchestrator | 2025-02-19 08:52:03.752089 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-02-19 08:52:03.752102 | orchestrator | Wednesday 19 February 2025 08:50:42 +0000 (0:00:02.009) 0:01:10.312 **** 2025-02-19 08:52:03.752116 | orchestrator | [WARNING]: Skipped 2025-02-19 08:52:03.752130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-02-19 08:52:03.752144 | orchestrator | to this access issue: 2025-02-19 08:52:03.752157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-02-19 08:52:03.752171 | orchestrator | directory 2025-02-19 08:52:03.752185 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 08:52:03.752199 | orchestrator | 2025-02-19 08:52:03.752213 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-02-19 08:52:03.752225 | orchestrator | Wednesday 19 February 2025 08:50:42 +0000 (0:00:00.821) 0:01:11.133 **** 2025-02-19 08:52:03.752237 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.752249 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:03.752261 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:03.752273 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:03.752286 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:52:03.752298 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:52:03.752310 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:52:03.752322 | orchestrator | 2025-02-19 08:52:03.752334 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-02-19 
08:52:03.752346 | orchestrator | Wednesday 19 February 2025 08:50:50 +0000 (0:00:07.634) 0:01:18.768 **** 2025-02-19 08:52:03.752365 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-19 08:52:03.752378 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-19 08:52:03.752390 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-19 08:52:03.752403 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-19 08:52:03.752415 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-19 08:52:03.752427 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-19 08:52:03.752440 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-19 08:52:03.752452 | orchestrator | 2025-02-19 08:52:03.752464 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-02-19 08:52:03.752476 | orchestrator | Wednesday 19 February 2025 08:50:55 +0000 (0:00:04.664) 0:01:23.432 **** 2025-02-19 08:52:03.752489 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.752501 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:03.752514 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:03.752526 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:03.752538 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:52:03.752550 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:52:03.752562 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:52:03.752574 | orchestrator | 2025-02-19 08:52:03.752587 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-02-19 08:52:03.752599 | orchestrator | Wednesday 19 February 2025 08:50:58 +0000 (0:00:03.671) 0:01:27.104 **** 2025-02-19 08:52:03.752612 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.752646 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.752661 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.752679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.752698 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.752711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.752724 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.752737 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.752750 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
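For reference, the three service definitions being looped over in this play (fluentd, kolla-toolbox and cron) can be collected into a single mapping. The sketch below reconstructs that mapping from the item values printed in this task output; the variable name common_services and the small loop at the end are illustrative assumptions and are not taken from the kolla-ansible source.

# Illustrative only: values copied from the loop items printed in this log.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.1",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "fluentd_data:/var/lib/fluentd/data/",
            "/var/log/journal:/var/log/journal:ro",
        ],
        "dimensions": {},
    },
    "kolla-toolbox": {
        "container_name": "kolla_toolbox",
        "group": "kolla-toolbox",
        "enabled": True,
        "image": "registry.osism.tech/kolla/kolla-toolbox:2024.1",
        "environment": {
            "ANSIBLE_NOCOLOR": "1",
            "ANSIBLE_LIBRARY": "/usr/share/ansible",
            "REQUESTS_CA_BUNDLE": "/etc/ssl/certs/ca-certificates.crt",
        },
        "privileged": True,
        "volumes": [
            "/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "/dev/:/dev/",
            "/run/:/run/:shared",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cron:2024.1",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

# Mirror the dict-style loops seen in this play: each task receives one
# {'key': ..., 'value': ...} item per enabled service.
for key, value in common_services.items():
    if value["enabled"]:
        print(f"{key}: image={value['image']}, {len(value['volumes'])} bind mounts")

Each host in the play receives one such item per enabled service, which is why the same dictionaries repeat for every testbed node in the task output above and below.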
2025-02-19 08:52:03.752769 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.752782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.752795 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.752818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.752835 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.752848 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.752861 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.752874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.752893 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.752906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 08:52:03.752924 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.752937 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.752950 | orchestrator | 2025-02-19 08:52:03.752962 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-02-19 08:52:03.752979 | orchestrator | Wednesday 19 February 2025 08:51:02 +0000 (0:00:04.004) 0:01:31.108 **** 2025-02-19 08:52:03.752992 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-19 08:52:03.753004 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-19 08:52:03.753017 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-19 08:52:03.753029 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-19 08:52:03.753041 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-19 08:52:03.753054 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-19 08:52:03.753111 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-19 08:52:03.753124 | orchestrator | 2025-02-19 08:52:03.753137 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-02-19 08:52:03.753149 | orchestrator | Wednesday 19 February 2025 08:51:06 +0000 (0:00:03.769) 0:01:34.878 **** 2025-02-19 08:52:03.753162 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-19 08:52:03.753174 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-19 08:52:03.753187 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-19 08:52:03.753199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-19 08:52:03.753211 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-19 08:52:03.753223 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-19 08:52:03.753235 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-19 08:52:03.753247 | orchestrator | 2025-02-19 08:52:03.753260 | orchestrator | TASK [common : Check common containers] **************************************** 2025-02-19 08:52:03.753272 | orchestrator | Wednesday 19 February 2025 08:51:09 +0000 (0:00:03.153) 0:01:38.032 **** 2025-02-19 08:52:03.753286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.753310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.753324 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.753337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.753351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.753389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.753427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-19 08:52:03.753452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753478 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753504 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753543 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753585 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:52:03.753598 | orchestrator | 2025-02-19 08:52:03.753611 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-02-19 08:52:03.753624 | orchestrator | Wednesday 19 February 2025 08:51:13 +0000 (0:00:04.076) 0:01:42.109 **** 2025-02-19 08:52:03.753651 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.753664 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:03.753676 | orchestrator | 
changed: [testbed-node-1] 2025-02-19 08:52:03.753689 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:03.753701 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:52:03.753718 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:52:03.753738 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:52:03.753765 | orchestrator | 2025-02-19 08:52:03.753788 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-02-19 08:52:03.753807 | orchestrator | Wednesday 19 February 2025 08:51:15 +0000 (0:00:02.041) 0:01:44.150 **** 2025-02-19 08:52:03.753827 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.753847 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:03.753867 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:03.753888 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:03.753908 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:52:03.753942 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:52:03.753963 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:52:03.753980 | orchestrator | 2025-02-19 08:52:03.753993 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-19 08:52:03.754005 | orchestrator | Wednesday 19 February 2025 08:51:17 +0000 (0:00:01.965) 0:01:46.116 **** 2025-02-19 08:52:03.754058 | orchestrator | 2025-02-19 08:52:03.754074 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-19 08:52:03.754087 | orchestrator | Wednesday 19 February 2025 08:51:17 +0000 (0:00:00.063) 0:01:46.180 **** 2025-02-19 08:52:03.754099 | orchestrator | 2025-02-19 08:52:03.754111 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-19 08:52:03.754124 | orchestrator | Wednesday 19 February 2025 08:51:18 +0000 (0:00:00.070) 0:01:46.251 **** 2025-02-19 08:52:03.754136 | orchestrator | 2025-02-19 08:52:03.754149 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-19 08:52:03.754161 | orchestrator | Wednesday 19 February 2025 08:51:18 +0000 (0:00:00.319) 0:01:46.570 **** 2025-02-19 08:52:03.754173 | orchestrator | 2025-02-19 08:52:03.754185 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-19 08:52:03.754197 | orchestrator | Wednesday 19 February 2025 08:51:18 +0000 (0:00:00.080) 0:01:46.651 **** 2025-02-19 08:52:03.754210 | orchestrator | 2025-02-19 08:52:03.754222 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-19 08:52:03.754234 | orchestrator | Wednesday 19 February 2025 08:51:18 +0000 (0:00:00.079) 0:01:46.730 **** 2025-02-19 08:52:03.754246 | orchestrator | 2025-02-19 08:52:03.754259 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-19 08:52:03.754271 | orchestrator | Wednesday 19 February 2025 08:51:18 +0000 (0:00:00.110) 0:01:46.841 **** 2025-02-19 08:52:03.754283 | orchestrator | 2025-02-19 08:52:03.754295 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-02-19 08:52:03.754317 | orchestrator | Wednesday 19 February 2025 08:51:19 +0000 (0:00:00.423) 0:01:47.265 **** 2025-02-19 08:52:03.754329 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:03.754342 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:03.754354 | orchestrator | 
changed: [testbed-node-2] 2025-02-19 08:52:03.754367 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:52:03.754379 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:52:03.754391 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.754403 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:52:03.754416 | orchestrator | 2025-02-19 08:52:03.754434 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-02-19 08:52:03.754447 | orchestrator | Wednesday 19 February 2025 08:51:30 +0000 (0:00:11.644) 0:01:58.909 **** 2025-02-19 08:52:03.754459 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:03.754472 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:03.754484 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:03.754496 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:52:03.754509 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:52:03.754521 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:52:03.754534 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.754546 | orchestrator | 2025-02-19 08:52:03.754559 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-02-19 08:52:03.754571 | orchestrator | Wednesday 19 February 2025 08:51:51 +0000 (0:00:20.580) 0:02:19.489 **** 2025-02-19 08:52:03.754583 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:52:03.754595 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:52:03.754608 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:52:03.754620 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:52:03.754654 | orchestrator | ok: [testbed-manager] 2025-02-19 08:52:03.754677 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:52:03.754691 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:52:03.754703 | orchestrator | 2025-02-19 08:52:03.754716 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-02-19 08:52:03.754735 | orchestrator | Wednesday 19 February 2025 08:51:54 +0000 (0:00:03.697) 0:02:23.187 **** 2025-02-19 08:52:03.754748 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:03.754760 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:03.754773 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:03.754785 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:52:03.754797 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:52:03.754809 | orchestrator | changed: [testbed-manager] 2025-02-19 08:52:03.754821 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:52:03.754834 | orchestrator | 2025-02-19 08:52:03.754846 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:52:03.754860 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:52:03.754873 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:52:03.754886 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:52:03.754898 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:52:03.754911 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:52:03.754923 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 
skipped=6  rescued=0 ignored=0 2025-02-19 08:52:03.754935 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 08:52:03.754948 | orchestrator | 2025-02-19 08:52:03.754960 | orchestrator | 2025-02-19 08:52:03.754972 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:52:03.754985 | orchestrator | Wednesday 19 February 2025 08:52:02 +0000 (0:00:07.097) 0:02:30.284 **** 2025-02-19 08:52:03.754997 | orchestrator | =============================================================================== 2025-02-19 08:52:03.755010 | orchestrator | common : Ensure fluentd image is present for label check --------------- 27.97s 2025-02-19 08:52:03.755022 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 20.58s 2025-02-19 08:52:03.755034 | orchestrator | common : Restart fluentd container ------------------------------------- 11.64s 2025-02-19 08:52:03.755047 | orchestrator | common : Copying over config.json files for services -------------------- 8.07s 2025-02-19 08:52:03.755059 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 7.63s 2025-02-19 08:52:03.755071 | orchestrator | common : Restart cron container ----------------------------------------- 7.10s 2025-02-19 08:52:03.755084 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.91s 2025-02-19 08:52:03.755096 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.66s 2025-02-19 08:52:03.755108 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.20s 2025-02-19 08:52:03.755120 | orchestrator | common : Check common containers ---------------------------------------- 4.08s 2025-02-19 08:52:03.755133 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.00s 2025-02-19 08:52:03.755146 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.78s 2025-02-19 08:52:03.755159 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.77s 2025-02-19 08:52:03.755176 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.70s 2025-02-19 08:52:03.762422 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.67s 2025-02-19 08:52:03.762547 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.40s 2025-02-19 08:52:03.762568 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.15s 2025-02-19 08:52:03.762583 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.79s 2025-02-19 08:52:03.762598 | orchestrator | common : Creating log volume -------------------------------------------- 2.04s 2025-02-19 08:52:03.762711 | orchestrator | common : Find custom fluentd format config files ------------------------ 2.01s 2025-02-19 08:52:03.762735 | orchestrator | 2025-02-19 08:52:03 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:03.762751 | orchestrator | 2025-02-19 08:52:03 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:03.762765 | orchestrator | 2025-02-19 08:52:03 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:03.762794 | orchestrator 
| 2025-02-19 08:52:03 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:06.805116 | orchestrator | 2025-02-19 08:52:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:06.805246 | orchestrator | 2025-02-19 08:52:06 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:06.805929 | orchestrator | 2025-02-19 08:52:06 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:06.805960 | orchestrator | 2025-02-19 08:52:06 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:06.807288 | orchestrator | 2025-02-19 08:52:06 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:06.808520 | orchestrator | 2025-02-19 08:52:06 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:06.811620 | orchestrator | 2025-02-19 08:52:06 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:09.862244 | orchestrator | 2025-02-19 08:52:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:09.862358 | orchestrator | 2025-02-19 08:52:09 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:09.863170 | orchestrator | 2025-02-19 08:52:09 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:09.866818 | orchestrator | 2025-02-19 08:52:09 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:09.868482 | orchestrator | 2025-02-19 08:52:09 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:09.869609 | orchestrator | 2025-02-19 08:52:09 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:09.871127 | orchestrator | 2025-02-19 08:52:09 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:12.924559 | orchestrator | 2025-02-19 08:52:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:12.924741 | orchestrator | 2025-02-19 08:52:12 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:12.927024 | orchestrator | 2025-02-19 08:52:12 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:12.927725 | orchestrator | 2025-02-19 08:52:12 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:12.933137 | orchestrator | 2025-02-19 08:52:12 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:15.987370 | orchestrator | 2025-02-19 08:52:12 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:15.987532 | orchestrator | 2025-02-19 08:52:12 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:15.987554 | orchestrator | 2025-02-19 08:52:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:15.987588 | orchestrator | 2025-02-19 08:52:15 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:15.988365 | orchestrator | 2025-02-19 08:52:15 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:15.989124 | orchestrator | 2025-02-19 08:52:15 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:15.992289 | orchestrator | 2025-02-19 08:52:15 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 
08:52:15.993107 | orchestrator | 2025-02-19 08:52:15 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:15.993139 | orchestrator | 2025-02-19 08:52:15 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:19.041783 | orchestrator | 2025-02-19 08:52:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:19.041927 | orchestrator | 2025-02-19 08:52:19 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:19.055174 | orchestrator | 2025-02-19 08:52:19 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:19.056778 | orchestrator | 2025-02-19 08:52:19 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:19.056805 | orchestrator | 2025-02-19 08:52:19 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:19.056820 | orchestrator | 2025-02-19 08:52:19 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:19.056839 | orchestrator | 2025-02-19 08:52:19 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:22.091863 | orchestrator | 2025-02-19 08:52:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:22.091973 | orchestrator | 2025-02-19 08:52:22 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:22.092312 | orchestrator | 2025-02-19 08:52:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:22.092332 | orchestrator | 2025-02-19 08:52:22 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:22.092839 | orchestrator | 2025-02-19 08:52:22 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:22.093237 | orchestrator | 2025-02-19 08:52:22 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:22.093693 | orchestrator | 2025-02-19 08:52:22 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:22.093733 | orchestrator | 2025-02-19 08:52:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:25.153882 | orchestrator | 2025-02-19 08:52:25 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:25.155446 | orchestrator | 2025-02-19 08:52:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:25.158381 | orchestrator | 2025-02-19 08:52:25 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:25.160722 | orchestrator | 2025-02-19 08:52:25 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:25.165355 | orchestrator | 2025-02-19 08:52:25 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:25.165809 | orchestrator | 2025-02-19 08:52:25 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:28.214585 | orchestrator | 2025-02-19 08:52:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:28.214780 | orchestrator | 2025-02-19 08:52:28 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:28.216463 | orchestrator | 2025-02-19 08:52:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:28.224373 | orchestrator | 2025-02-19 08:52:28 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in 
state STARTED 2025-02-19 08:52:28.225808 | orchestrator | 2025-02-19 08:52:28 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:28.227829 | orchestrator | 2025-02-19 08:52:28 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:28.232389 | orchestrator | 2025-02-19 08:52:28 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:31.311563 | orchestrator | 2025-02-19 08:52:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:31.311725 | orchestrator | 2025-02-19 08:52:31 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:31.314801 | orchestrator | 2025-02-19 08:52:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:31.318757 | orchestrator | 2025-02-19 08:52:31 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:31.319703 | orchestrator | 2025-02-19 08:52:31 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:31.320861 | orchestrator | 2025-02-19 08:52:31 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:31.321916 | orchestrator | 2025-02-19 08:52:31 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state STARTED 2025-02-19 08:52:34.389819 | orchestrator | 2025-02-19 08:52:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:34.389956 | orchestrator | 2025-02-19 08:52:34 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:34.390795 | orchestrator | 2025-02-19 08:52:34 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:34.391850 | orchestrator | 2025-02-19 08:52:34 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:34.393787 | orchestrator | 2025-02-19 08:52:34 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:34.395177 | orchestrator | 2025-02-19 08:52:34 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:34.397609 | orchestrator | 2025-02-19 08:52:34 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:34.399976 | orchestrator | 2025-02-19 08:52:34 | INFO  | Task 1669bfd4-951b-4d0d-a3d2-5c3cf40b1f32 is in state SUCCESS 2025-02-19 08:52:34.400308 | orchestrator | 2025-02-19 08:52:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:37.448037 | orchestrator | 2025-02-19 08:52:37 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:37.448445 | orchestrator | 2025-02-19 08:52:37 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:37.449230 | orchestrator | 2025-02-19 08:52:37 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:37.450202 | orchestrator | 2025-02-19 08:52:37 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:37.452520 | orchestrator | 2025-02-19 08:52:37 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:37.453231 | orchestrator | 2025-02-19 08:52:37 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:40.511849 | orchestrator | 2025-02-19 08:52:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:40.511983 | orchestrator | 2025-02-19 08:52:40 | INFO  | Task 
ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:40.513203 | orchestrator | 2025-02-19 08:52:40 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:40.515012 | orchestrator | 2025-02-19 08:52:40 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:40.517294 | orchestrator | 2025-02-19 08:52:40 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:40.519801 | orchestrator | 2025-02-19 08:52:40 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:40.520761 | orchestrator | 2025-02-19 08:52:40 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:43.628566 | orchestrator | 2025-02-19 08:52:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:43.628720 | orchestrator | 2025-02-19 08:52:43 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:43.633067 | orchestrator | 2025-02-19 08:52:43 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:43.639230 | orchestrator | 2025-02-19 08:52:43 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:43.645973 | orchestrator | 2025-02-19 08:52:43 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:43.657003 | orchestrator | 2025-02-19 08:52:43 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:43.664158 | orchestrator | 2025-02-19 08:52:43 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:46.769622 | orchestrator | 2025-02-19 08:52:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:46.769784 | orchestrator | 2025-02-19 08:52:46 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:46.778097 | orchestrator | 2025-02-19 08:52:46 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:46.789511 | orchestrator | 2025-02-19 08:52:46 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:46.800348 | orchestrator | 2025-02-19 08:52:46 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:46.806348 | orchestrator | 2025-02-19 08:52:46 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:46.813933 | orchestrator | 2025-02-19 08:52:46 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:49.898172 | orchestrator | 2025-02-19 08:52:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:49.898363 | orchestrator | 2025-02-19 08:52:49 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:52.941220 | orchestrator | 2025-02-19 08:52:49 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:52.941341 | orchestrator | 2025-02-19 08:52:49 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:52.941360 | orchestrator | 2025-02-19 08:52:49 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:52.941375 | orchestrator | 2025-02-19 08:52:49 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:52.941419 | orchestrator | 2025-02-19 08:52:49 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state STARTED 2025-02-19 08:52:52.941435 | 
orchestrator | 2025-02-19 08:52:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:52.941470 | orchestrator | 2025-02-19 08:52:52 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:52.950704 | orchestrator | 2025-02-19 08:52:52 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:52.952124 | orchestrator | 2025-02-19 08:52:52 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:52.952998 | orchestrator | 2025-02-19 08:52:52 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:52.954145 | orchestrator | 2025-02-19 08:52:52 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:52.954853 | orchestrator | 2025-02-19 08:52:52 | INFO  | Task 58993c28-bc02-4b2e-8d82-2340e8faf47c is in state SUCCESS 2025-02-19 08:52:52.956083 | orchestrator | 2025-02-19 08:52:52.956119 | orchestrator | 2025-02-19 08:52:52.956133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 08:52:52.956149 | orchestrator | 2025-02-19 08:52:52.956163 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 08:52:52.956194 | orchestrator | Wednesday 19 February 2025 08:52:10 +0000 (0:00:00.863) 0:00:00.863 **** 2025-02-19 08:52:52.956209 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:52:52.956225 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:52:52.956239 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:52:52.956253 | orchestrator | 2025-02-19 08:52:52.956267 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 08:52:52.956281 | orchestrator | Wednesday 19 February 2025 08:52:11 +0000 (0:00:00.789) 0:00:01.652 **** 2025-02-19 08:52:52.956296 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-02-19 08:52:52.956310 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-02-19 08:52:52.956324 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-02-19 08:52:52.956338 | orchestrator | 2025-02-19 08:52:52.956352 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-02-19 08:52:52.956366 | orchestrator | 2025-02-19 08:52:52.956380 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-02-19 08:52:52.956393 | orchestrator | Wednesday 19 February 2025 08:52:12 +0000 (0:00:00.674) 0:00:02.327 **** 2025-02-19 08:52:52.956407 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:52:52.956422 | orchestrator | 2025-02-19 08:52:52.956436 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-02-19 08:52:52.956450 | orchestrator | Wednesday 19 February 2025 08:52:14 +0000 (0:00:02.534) 0:00:04.862 **** 2025-02-19 08:52:52.956463 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-02-19 08:52:52.956477 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-02-19 08:52:52.956491 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-02-19 08:52:52.956505 | orchestrator | 2025-02-19 08:52:52.956519 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-02-19 08:52:52.956533 | orchestrator | Wednesday 19 
February 2025 08:52:16 +0000 (0:00:02.047) 0:00:06.909 **** 2025-02-19 08:52:52.956547 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-02-19 08:52:52.956561 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-02-19 08:52:52.956575 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-02-19 08:52:52.956589 | orchestrator | 2025-02-19 08:52:52.956603 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-02-19 08:52:52.956617 | orchestrator | Wednesday 19 February 2025 08:52:19 +0000 (0:00:02.740) 0:00:09.650 **** 2025-02-19 08:52:52.956682 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:52.956705 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:52.956721 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:52.956737 | orchestrator | 2025-02-19 08:52:52.956752 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-02-19 08:52:52.956768 | orchestrator | Wednesday 19 February 2025 08:52:22 +0000 (0:00:02.758) 0:00:12.408 **** 2025-02-19 08:52:52.956782 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:52.956796 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:52.956810 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:52.956824 | orchestrator | 2025-02-19 08:52:52.956838 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:52:52.956852 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:52:52.956867 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:52:52.956882 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:52:52.956896 | orchestrator | 2025-02-19 08:52:52.956910 | orchestrator | 2025-02-19 08:52:52.956923 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:52:52.956937 | orchestrator | Wednesday 19 February 2025 08:52:31 +0000 (0:00:08.968) 0:00:21.376 **** 2025-02-19 08:52:52.956951 | orchestrator | =============================================================================== 2025-02-19 08:52:52.956965 | orchestrator | memcached : Restart memcached container --------------------------------- 8.97s 2025-02-19 08:52:52.956979 | orchestrator | memcached : Check memcached container ----------------------------------- 2.76s 2025-02-19 08:52:52.956992 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.74s 2025-02-19 08:52:52.957006 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.53s 2025-02-19 08:52:52.957020 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.05s 2025-02-19 08:52:52.957034 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.79s 2025-02-19 08:52:52.957048 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2025-02-19 08:52:52.957061 | orchestrator | 2025-02-19 08:52:52.957075 | orchestrator | 2025-02-19 08:52:52.957089 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 08:52:52.957103 | orchestrator | 2025-02-19 08:52:52.957117 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-02-19 08:52:52.957131 | orchestrator | Wednesday 19 February 2025 08:52:09 +0000 (0:00:00.452) 0:00:00.452 **** 2025-02-19 08:52:52.957144 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:52:52.957159 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:52:52.957173 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:52:52.957187 | orchestrator | 2025-02-19 08:52:52.957201 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 08:52:52.957224 | orchestrator | Wednesday 19 February 2025 08:52:10 +0000 (0:00:00.911) 0:00:01.364 **** 2025-02-19 08:52:52.957239 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-02-19 08:52:52.957253 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-02-19 08:52:52.957272 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-02-19 08:52:52.957287 | orchestrator | 2025-02-19 08:52:52.957301 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-02-19 08:52:52.957314 | orchestrator | 2025-02-19 08:52:52.957329 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-02-19 08:52:52.957347 | orchestrator | Wednesday 19 February 2025 08:52:11 +0000 (0:00:00.758) 0:00:02.122 **** 2025-02-19 08:52:52.957362 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:52:52.957383 | orchestrator | 2025-02-19 08:52:52.957397 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-02-19 08:52:52.957412 | orchestrator | Wednesday 19 February 2025 08:52:13 +0000 (0:00:02.030) 0:00:04.152 **** 2025-02-19 08:52:52.957428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
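The Python-dict item dumps above are the flattened Kolla service definitions that the redis role loops over, one entry per container. Purely as a readability aid — this is not code taken from the role, and it is not the config.json that kolla-ansible itself renders — the 'redis' entry can be reproduced and pretty-printed with a few lines of Python:

```python
import json

# The 'redis' service definition exactly as it appears in the loop items
# above; all values are copied from the log. The actual config.json rendered
# by kolla-ansible for the container comes from the role's own templates.
redis_service = {
    "container_name": "redis",
    "group": "redis",
    "enabled": True,
    "image": "registry.osism.tech/kolla/redis:2024.1",
    "volumes": [
        "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "redis:/var/lib/redis/",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
        "timeout": "30",
    },
}

# Pretty-print the definition so the volumes and healthcheck are easy to scan.
print(json.dumps(redis_service, indent=2))
```

The redis-sentinel item that follows in the log has the same shape, with an added environment block (REDIS_CONF, REDIS_GEN_CONF), the redis-sentinel image, and a healthcheck on port 26379.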
2025-02-19 08:52:52.957477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957536 | orchestrator | 2025-02-19 08:52:52.957550 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-02-19 08:52:52.957564 | orchestrator | Wednesday 19 February 2025 08:52:16 +0000 (0:00:03.280) 0:00:07.433 **** 2025-02-19 08:52:52.957578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957697 | orchestrator | 2025-02-19 08:52:52.957712 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-02-19 08:52:52.957726 | orchestrator | Wednesday 19 February 2025 08:52:20 +0000 (0:00:03.312) 0:00:10.745 **** 2025-02-19 08:52:52.957740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957832 | orchestrator | 2025-02-19 
08:52:52.957852 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-02-19 08:52:52.957866 | orchestrator | Wednesday 19 February 2025 08:52:24 +0000 (0:00:04.066) 0:00:14.811 **** 2025-02-19 08:52:52.957881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-02-19 08:52:52.957974 | orchestrator | 2025-02-19 08:52:52.957988 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-19 08:52:52.958003 | orchestrator | Wednesday 19 February 2025 08:52:26 +0000 (0:00:02.770) 0:00:17.582 **** 2025-02-19 08:52:52.958068 | orchestrator | 2025-02-19 08:52:52.958097 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-19 08:52:52.958131 | orchestrator | Wednesday 19 February 2025 08:52:27 +0000 (0:00:00.216) 0:00:17.798 **** 2025-02-19 08:52:56.029070 | orchestrator | 2025-02-19 08:52:56.029200 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-02-19 08:52:56.029222 | orchestrator | Wednesday 19 February 2025 08:52:27 +0000 (0:00:00.213) 0:00:18.011 **** 2025-02-19 08:52:56.029237 | orchestrator | 2025-02-19 08:52:56.029252 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-02-19 08:52:56.029267 | orchestrator | Wednesday 19 February 2025 08:52:27 +0000 (0:00:00.263) 0:00:18.275 **** 2025-02-19 08:52:56.029281 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:56.029297 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:56.029312 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:56.029326 | orchestrator | 2025-02-19 08:52:56.029340 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-02-19 08:52:56.029355 | orchestrator | Wednesday 19 February 2025 08:52:37 +0000 (0:00:09.602) 0:00:27.878 **** 2025-02-19 08:52:56.029369 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:52:56.029401 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:52:56.029416 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:52:56.029430 | orchestrator | 2025-02-19 08:52:56.029444 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:52:56.029459 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:52:56.029475 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:52:56.029489 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:52:56.029503 | orchestrator | 2025-02-19 08:52:56.029518 | orchestrator | 2025-02-19 08:52:56.029532 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:52:56.029546 | orchestrator | Wednesday 19 February 2025 08:52:51 +0000 (0:00:14.335) 0:00:42.213 **** 2025-02-19 08:52:56.029560 | orchestrator | =============================================================================== 2025-02-19 08:52:56.029574 | orchestrator | redis : Restart redis-sentinel 
container ------------------------------- 14.34s 2025-02-19 08:52:56.029588 | orchestrator | redis : Restart redis container ----------------------------------------- 9.60s 2025-02-19 08:52:56.029604 | orchestrator | redis : Copying over redis config files --------------------------------- 4.07s 2025-02-19 08:52:56.029621 | orchestrator | redis : Copying over default config.json files -------------------------- 3.31s 2025-02-19 08:52:56.029729 | orchestrator | redis : Ensuring config directories exist ------------------------------- 3.28s 2025-02-19 08:52:56.029747 | orchestrator | redis : Check redis containers ------------------------------------------ 2.77s 2025-02-19 08:52:56.029763 | orchestrator | redis : include_tasks --------------------------------------------------- 2.03s 2025-02-19 08:52:56.029779 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.91s 2025-02-19 08:52:56.029802 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s 2025-02-19 08:52:56.029818 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.69s 2025-02-19 08:52:56.029859 | orchestrator | 2025-02-19 08:52:52 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:56.029895 | orchestrator | 2025-02-19 08:52:56 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:56.030214 | orchestrator | 2025-02-19 08:52:56 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:56.036047 | orchestrator | 2025-02-19 08:52:56 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:56.038742 | orchestrator | 2025-02-19 08:52:56 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:56.040994 | orchestrator | 2025-02-19 08:52:56 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:52:59.095566 | orchestrator | 2025-02-19 08:52:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:52:59.095733 | orchestrator | 2025-02-19 08:52:59 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:52:59.095839 | orchestrator | 2025-02-19 08:52:59 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:52:59.096858 | orchestrator | 2025-02-19 08:52:59 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:52:59.097407 | orchestrator | 2025-02-19 08:52:59 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:52:59.102942 | orchestrator | 2025-02-19 08:52:59 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:02.139907 | orchestrator | 2025-02-19 08:52:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:02.140114 | orchestrator | 2025-02-19 08:53:02 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:02.140209 | orchestrator | 2025-02-19 08:53:02 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:02.141694 | orchestrator | 2025-02-19 08:53:02 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:02.143068 | orchestrator | 2025-02-19 08:53:02 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:02.145072 | orchestrator | 2025-02-19 08:53:02 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 
08:53:05.196698 | orchestrator | 2025-02-19 08:53:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:05.196839 | orchestrator | 2025-02-19 08:53:05 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:05.197730 | orchestrator | 2025-02-19 08:53:05 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:05.198825 | orchestrator | 2025-02-19 08:53:05 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:05.199824 | orchestrator | 2025-02-19 08:53:05 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:05.201337 | orchestrator | 2025-02-19 08:53:05 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:08.261840 | orchestrator | 2025-02-19 08:53:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:08.261951 | orchestrator | 2025-02-19 08:53:08 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:08.266565 | orchestrator | 2025-02-19 08:53:08 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:08.268739 | orchestrator | 2025-02-19 08:53:08 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:08.274135 | orchestrator | 2025-02-19 08:53:08 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:08.275475 | orchestrator | 2025-02-19 08:53:08 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:11.337740 | orchestrator | 2025-02-19 08:53:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:11.337883 | orchestrator | 2025-02-19 08:53:11 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:11.337972 | orchestrator | 2025-02-19 08:53:11 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:11.338427 | orchestrator | 2025-02-19 08:53:11 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:11.339211 | orchestrator | 2025-02-19 08:53:11 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:11.340118 | orchestrator | 2025-02-19 08:53:11 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:11.340241 | orchestrator | 2025-02-19 08:53:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:14.392747 | orchestrator | 2025-02-19 08:53:14 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:14.396321 | orchestrator | 2025-02-19 08:53:14 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:14.401304 | orchestrator | 2025-02-19 08:53:14 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:14.403295 | orchestrator | 2025-02-19 08:53:14 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:14.404536 | orchestrator | 2025-02-19 08:53:14 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:17.446190 | orchestrator | 2025-02-19 08:53:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:17.446333 | orchestrator | 2025-02-19 08:53:17 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:17.446946 | orchestrator | 2025-02-19 08:53:17 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 
08:53:17.448299 | orchestrator | 2025-02-19 08:53:17 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:17.449510 | orchestrator | 2025-02-19 08:53:17 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:17.450703 | orchestrator | 2025-02-19 08:53:17 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:20.493959 | orchestrator | 2025-02-19 08:53:17 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:20.494161 | orchestrator | 2025-02-19 08:53:20 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:20.496973 | orchestrator | 2025-02-19 08:53:20 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:20.502602 | orchestrator | 2025-02-19 08:53:20 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:20.502739 | orchestrator | 2025-02-19 08:53:20 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:20.506103 | orchestrator | 2025-02-19 08:53:20 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:23.557570 | orchestrator | 2025-02-19 08:53:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:23.557760 | orchestrator | 2025-02-19 08:53:23 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:23.563126 | orchestrator | 2025-02-19 08:53:23 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:23.563206 | orchestrator | 2025-02-19 08:53:23 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:26.601137 | orchestrator | 2025-02-19 08:53:23 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:26.601269 | orchestrator | 2025-02-19 08:53:23 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:26.601289 | orchestrator | 2025-02-19 08:53:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:26.601322 | orchestrator | 2025-02-19 08:53:26 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:26.602842 | orchestrator | 2025-02-19 08:53:26 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:26.603688 | orchestrator | 2025-02-19 08:53:26 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:26.603733 | orchestrator | 2025-02-19 08:53:26 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:26.604575 | orchestrator | 2025-02-19 08:53:26 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:26.604846 | orchestrator | 2025-02-19 08:53:26 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:29.655540 | orchestrator | 2025-02-19 08:53:29 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:32.703999 | orchestrator | 2025-02-19 08:53:29 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:32.704114 | orchestrator | 2025-02-19 08:53:29 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:32.704130 | orchestrator | 2025-02-19 08:53:29 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:32.704143 | orchestrator | 2025-02-19 08:53:29 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in 
state STARTED 2025-02-19 08:53:32.704155 | orchestrator | 2025-02-19 08:53:29 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:32.704182 | orchestrator | 2025-02-19 08:53:32 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:32.704338 | orchestrator | 2025-02-19 08:53:32 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:32.704364 | orchestrator | 2025-02-19 08:53:32 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:32.707753 | orchestrator | 2025-02-19 08:53:32 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:32.708372 | orchestrator | 2025-02-19 08:53:32 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:35.753140 | orchestrator | 2025-02-19 08:53:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:35.753278 | orchestrator | 2025-02-19 08:53:35 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:35.756061 | orchestrator | 2025-02-19 08:53:35 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:35.756152 | orchestrator | 2025-02-19 08:53:35 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:35.756173 | orchestrator | 2025-02-19 08:53:35 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:35.756192 | orchestrator | 2025-02-19 08:53:35 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:38.809550 | orchestrator | 2025-02-19 08:53:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:38.809779 | orchestrator | 2025-02-19 08:53:38 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:38.812856 | orchestrator | 2025-02-19 08:53:38 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:38.812925 | orchestrator | 2025-02-19 08:53:38 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:38.813935 | orchestrator | 2025-02-19 08:53:38 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:38.814986 | orchestrator | 2025-02-19 08:53:38 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:41.853406 | orchestrator | 2025-02-19 08:53:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:41.853564 | orchestrator | 2025-02-19 08:53:41 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:41.853925 | orchestrator | 2025-02-19 08:53:41 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:41.855119 | orchestrator | 2025-02-19 08:53:41 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:41.857195 | orchestrator | 2025-02-19 08:53:41 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:44.895943 | orchestrator | 2025-02-19 08:53:41 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:44.896071 | orchestrator | 2025-02-19 08:53:41 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:44.896164 | orchestrator | 2025-02-19 08:53:44 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:44.896249 | orchestrator | 2025-02-19 08:53:44 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in 
state STARTED 2025-02-19 08:53:44.897052 | orchestrator | 2025-02-19 08:53:44 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:44.897838 | orchestrator | 2025-02-19 08:53:44 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:44.898294 | orchestrator | 2025-02-19 08:53:44 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state STARTED 2025-02-19 08:53:47.932391 | orchestrator | 2025-02-19 08:53:44 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:47.932568 | orchestrator | 2025-02-19 08:53:47 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:47.933190 | orchestrator | 2025-02-19 08:53:47 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:47.933233 | orchestrator | 2025-02-19 08:53:47 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:47.933968 | orchestrator | 2025-02-19 08:53:47 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:47.934919 | orchestrator | 2025-02-19 08:53:47 | INFO  | Task 6058f03e-fb9b-4e12-b24d-187b0406275a is in state SUCCESS 2025-02-19 08:53:47.937051 | orchestrator | 2025-02-19 08:53:47.937098 | orchestrator | 2025-02-19 08:53:47.937113 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 08:53:47.937129 | orchestrator | 2025-02-19 08:53:47.937143 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 08:53:47.937158 | orchestrator | Wednesday 19 February 2025 08:52:10 +0000 (0:00:00.626) 0:00:00.626 **** 2025-02-19 08:53:47.937172 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:53:47.937188 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:53:47.937202 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:53:47.937216 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:53:47.937230 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:53:47.937267 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:53:47.937283 | orchestrator | 2025-02-19 08:53:47.937297 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 08:53:47.937312 | orchestrator | Wednesday 19 February 2025 08:52:12 +0000 (0:00:02.163) 0:00:02.790 **** 2025-02-19 08:53:47.937326 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-19 08:53:47.937340 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-19 08:53:47.937355 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-19 08:53:47.937369 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-19 08:53:47.937383 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-19 08:53:47.937397 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-19 08:53:47.937411 | orchestrator | 2025-02-19 08:53:47.937425 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-02-19 08:53:47.937439 | orchestrator | 2025-02-19 08:53:47.937453 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-02-19 08:53:47.937467 | orchestrator | Wednesday 19 February 2025 08:52:15 +0000 (0:00:03.058) 
0:00:05.849 **** 2025-02-19 08:53:47.937482 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:53:47.937497 | orchestrator | 2025-02-19 08:53:47.937511 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-19 08:53:47.937525 | orchestrator | Wednesday 19 February 2025 08:52:18 +0000 (0:00:03.244) 0:00:09.093 **** 2025-02-19 08:53:47.937540 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-02-19 08:53:47.937554 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-02-19 08:53:47.937568 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-02-19 08:53:47.937582 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-02-19 08:53:47.937596 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-02-19 08:53:47.937611 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-02-19 08:53:47.937626 | orchestrator | 2025-02-19 08:53:47.937641 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-19 08:53:47.937705 | orchestrator | Wednesday 19 February 2025 08:52:20 +0000 (0:00:01.704) 0:00:10.798 **** 2025-02-19 08:53:47.937722 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-02-19 08:53:47.937737 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-02-19 08:53:47.937754 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-02-19 08:53:47.937769 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-02-19 08:53:47.937785 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-02-19 08:53:47.937801 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-02-19 08:53:47.937817 | orchestrator | 2025-02-19 08:53:47.937833 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-19 08:53:47.937849 | orchestrator | Wednesday 19 February 2025 08:52:22 +0000 (0:00:02.739) 0:00:13.537 **** 2025-02-19 08:53:47.937864 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-02-19 08:53:47.937880 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:53:47.937897 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-02-19 08:53:47.937912 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:53:47.937928 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-02-19 08:53:47.937944 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:53:47.937960 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-02-19 08:53:47.937975 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:53:47.937990 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-02-19 08:53:47.938075 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:53:47.938106 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-02-19 08:53:47.938128 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:53:47.938152 | orchestrator | 2025-02-19 08:53:47.938175 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-02-19 08:53:47.938197 | orchestrator | Wednesday 19 February 2025 08:52:25 +0000 (0:00:02.857) 0:00:16.394 **** 2025-02-19 08:53:47.938221 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:53:47.938245 | 
orchestrator | skipping: [testbed-node-1] 2025-02-19 08:53:47.938270 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:53:47.938292 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:53:47.938317 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:53:47.938341 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:53:47.938368 | orchestrator | 2025-02-19 08:53:47.938394 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-02-19 08:53:47.938420 | orchestrator | Wednesday 19 February 2025 08:52:26 +0000 (0:00:00.712) 0:00:17.107 **** 2025-02-19 08:53:47.938461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938727 | orchestrator | 2025-02-19 08:53:47.938741 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-02-19 08:53:47.938755 | orchestrator | Wednesday 19 February 2025 08:52:29 +0000 (0:00:02.844) 0:00:19.951 **** 2025-02-19 08:53:47.938770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 
'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 
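The repeated "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines interleaved throughout this section come from the orchestrator polling the queued deployment tasks until each one reports SUCCESS. A minimal sketch of that polling pattern, under the assumption that a task's state can be looked up by ID, is shown below; get_task_state is a hypothetical stand-in, not the actual OSISM client API, and the printed messages merely mirror the log lines above.

```python
import time


def get_task_state(task_id: str) -> str:
    """Hypothetical stand-in for looking up a task's state (for example in a
    Celery result backend). The real OSISM tooling does this differently; the
    stub only exists so the polling pattern below runs as written."""
    return "SUCCESS"


def wait_for_tasks(task_ids, interval=1):
    """Poll every task until none is left unfinished, mirroring the
    'is in state STARTED' / 'Wait 1 second(s) until the next check'
    messages in the surrounding log."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)


# Example with two of the task IDs visible in this log excerpt.
wait_for_tasks([
    "ef7e6e63-b609-48cb-a87b-4bfb9584a28f",
    "ebbde9fb-38b2-4b50-aa03-4009fc98a1cd",
])
```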
2025-02-19 08:53:47.938890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938965 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.938979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939017 | orchestrator | 2025-02-19 08:53:47.939031 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-02-19 08:53:47.939046 | orchestrator | Wednesday 19 February 2025 08:52:34 +0000 (0:00:04.734) 0:00:24.686 **** 2025-02-19 08:53:47.939060 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:53:47.939074 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:53:47.939088 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:53:47.939102 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:53:47.939115 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:53:47.939129 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:53:47.939143 | orchestrator | 2025-02-19 08:53:47.939157 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-02-19 08:53:47.939171 | orchestrator | Wednesday 19 February 2025 08:52:38 +0000 (0:00:04.789) 0:00:29.475 **** 2025-02-19 08:53:47.939184 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:53:47.939198 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:53:47.939212 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:53:47.939226 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:53:47.939239 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:53:47.939253 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:53:47.939267 | orchestrator | 2025-02-19 08:53:47.939286 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-02-19 08:53:47.939300 | orchestrator | Wednesday 19 February 2025 08:52:45 +0000 (0:00:06.584) 0:00:36.060 **** 2025-02-19 08:53:47.939321 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:53:47.939335 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:53:47.939349 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:53:47.939363 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:53:47.939376 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:53:47.939390 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:53:47.939403 | orchestrator | 2025-02-19 08:53:47.939417 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-02-19 08:53:47.939431 | orchestrator | Wednesday 19 February 2025 08:52:49 +0000 (0:00:03.564) 0:00:39.624 **** 2025-02-19 08:53:47.939445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939699 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-02-19 08:53:47.939742 | orchestrator | 2025-02-19 08:53:47.939756 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-19 08:53:47.939771 | orchestrator | Wednesday 19 February 2025 08:52:51 +0000 (0:00:02.791) 0:00:42.416 **** 2025-02-19 08:53:47.939785 | orchestrator | 2025-02-19 08:53:47.939798 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-19 08:53:47.939811 | orchestrator | Wednesday 19 February 2025 08:52:52 +0000 (0:00:00.777) 0:00:43.193 **** 2025-02-19 08:53:47.939824 | orchestrator | 2025-02-19 08:53:47.939836 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-19 08:53:47.939849 | orchestrator | Wednesday 19 February 2025 08:52:52 +0000 (0:00:00.342) 0:00:43.536 **** 2025-02-19 08:53:47.939861 | orchestrator | 2025-02-19 08:53:47.939873 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-19 08:53:47.939886 | orchestrator | Wednesday 19 February 2025 08:52:53 +0000 (0:00:00.625) 0:00:44.161 **** 2025-02-19 08:53:47.939898 | orchestrator | 2025-02-19 08:53:47.939910 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-19 08:53:47.939923 | orchestrator | Wednesday 19 February 2025 08:52:54 +0000 (0:00:00.558) 0:00:44.720 **** 2025-02-19 08:53:47.939935 | orchestrator | 2025-02-19 08:53:47.939947 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-02-19 08:53:47.939960 | orchestrator | Wednesday 19 February 2025 08:52:56 +0000 (0:00:01.869) 0:00:46.590 **** 2025-02-19 08:53:47.939972 | orchestrator | 2025-02-19 08:53:47.939984 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-02-19 08:53:47.939996 | orchestrator | Wednesday 19 February 2025 08:52:56 +0000 (0:00:00.451) 0:00:47.041 **** 2025-02-19 08:53:47.940009 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:53:47.940021 | orchestrator | changed: 
[testbed-node-2] 2025-02-19 08:53:47.940034 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:53:47.940046 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:53:47.940058 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:53:47.940070 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:53:47.940082 | orchestrator | 2025-02-19 08:53:47.940095 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-02-19 08:53:47.940108 | orchestrator | Wednesday 19 February 2025 08:53:10 +0000 (0:00:13.534) 0:01:00.575 **** 2025-02-19 08:53:47.940120 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:53:47.940133 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:53:47.940145 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:53:47.940157 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:53:47.940169 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:53:47.940182 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:53:47.940194 | orchestrator | 2025-02-19 08:53:47.940207 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-02-19 08:53:47.940225 | orchestrator | Wednesday 19 February 2025 08:53:12 +0000 (0:00:02.340) 0:01:02.916 **** 2025-02-19 08:53:47.940237 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:53:47.940251 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:53:47.940271 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:53:47.940285 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:53:47.940299 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:53:47.940311 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:53:47.940324 | orchestrator | 2025-02-19 08:53:47.940343 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-02-19 08:53:47.940356 | orchestrator | Wednesday 19 February 2025 08:53:25 +0000 (0:00:12.760) 0:01:15.677 **** 2025-02-19 08:53:47.940369 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-02-19 08:53:47.940381 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-02-19 08:53:47.940394 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-02-19 08:53:47.940407 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-02-19 08:53:47.940419 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-02-19 08:53:47.940431 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-02-19 08:53:47.940444 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-02-19 08:53:47.940456 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-02-19 08:53:47.940468 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-02-19 08:53:47.940481 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-02-19 08:53:47.940493 | orchestrator | changed: [testbed-node-4] 
=> (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-02-19 08:53:47.940505 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-02-19 08:53:47.940518 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-19 08:53:47.940530 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-19 08:53:47.940542 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-19 08:53:47.940555 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-19 08:53:47.940567 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-19 08:53:47.940579 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-02-19 08:53:47.940592 | orchestrator | 2025-02-19 08:53:47.940604 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-02-19 08:53:47.940617 | orchestrator | Wednesday 19 February 2025 08:53:32 +0000 (0:00:07.045) 0:01:22.723 **** 2025-02-19 08:53:47.940630 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-02-19 08:53:47.940642 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:53:47.940671 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-02-19 08:53:47.940684 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:53:47.940696 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-02-19 08:53:47.940719 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:53:47.940732 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-02-19 08:53:47.940744 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-02-19 08:53:47.940757 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-02-19 08:53:47.940769 | orchestrator | 2025-02-19 08:53:47.940781 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-02-19 08:53:47.940794 | orchestrator | Wednesday 19 February 2025 08:53:34 +0000 (0:00:02.409) 0:01:25.132 **** 2025-02-19 08:53:47.940806 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-02-19 08:53:47.940819 | orchestrator | skipping: [testbed-node-3] 2025-02-19 08:53:47.940832 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-02-19 08:53:47.940844 | orchestrator | skipping: [testbed-node-4] 2025-02-19 08:53:47.940857 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-02-19 08:53:47.940869 | orchestrator | skipping: [testbed-node-5] 2025-02-19 08:53:47.940882 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-02-19 08:53:47.940895 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-02-19 08:53:47.940907 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-02-19 08:53:47.940919 | orchestrator | 2025-02-19 08:53:47.940932 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-02-19 08:53:47.940944 | orchestrator | Wednesday 19 February 2025 08:53:39 +0000 
(0:00:04.999) 0:01:30.132 **** 2025-02-19 08:53:47.940956 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:53:47.940969 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:53:47.940982 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:53:47.940994 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:53:47.941006 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:53:47.941018 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:53:47.941031 | orchestrator | 2025-02-19 08:53:47.941043 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:53:47.941061 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-19 08:53:50.975926 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-19 08:53:50.976081 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-19 08:53:50.976102 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:53:50.976117 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:53:50.976149 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 08:53:50.976164 | orchestrator | 2025-02-19 08:53:50.976179 | orchestrator | 2025-02-19 08:53:50.976194 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:53:50.976214 | orchestrator | Wednesday 19 February 2025 08:53:47 +0000 (0:00:07.709) 0:01:37.842 **** 2025-02-19 08:53:50.976229 | orchestrator | =============================================================================== 2025-02-19 08:53:50.976243 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.47s 2025-02-19 08:53:50.976257 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 13.53s 2025-02-19 08:53:50.976271 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.05s 2025-02-19 08:53:50.976285 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 6.59s 2025-02-19 08:53:50.976320 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.00s 2025-02-19 08:53:50.976335 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 4.79s 2025-02-19 08:53:50.976349 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.74s 2025-02-19 08:53:50.976363 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 4.63s 2025-02-19 08:53:50.976377 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 3.56s 2025-02-19 08:53:50.976450 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.24s 2025-02-19 08:53:50.976467 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.06s 2025-02-19 08:53:50.976484 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.86s 2025-02-19 08:53:50.976518 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.84s 2025-02-19 08:53:50.976535 | orchestrator | openvswitch 
: Check openvswitch containers ------------------------------ 2.79s 2025-02-19 08:53:50.976550 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.74s 2025-02-19 08:53:50.976566 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.41s 2025-02-19 08:53:50.976582 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.34s 2025-02-19 08:53:50.976597 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.16s 2025-02-19 08:53:50.976614 | orchestrator | module-load : Load modules ---------------------------------------------- 1.70s 2025-02-19 08:53:50.976629 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.71s 2025-02-19 08:53:50.976646 | orchestrator | 2025-02-19 08:53:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:50.976709 | orchestrator | 2025-02-19 08:53:50 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:50.976807 | orchestrator | 2025-02-19 08:53:50 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:50.979519 | orchestrator | 2025-02-19 08:53:50 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:50.980814 | orchestrator | 2025-02-19 08:53:50 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:53:50.982469 | orchestrator | 2025-02-19 08:53:50 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:54.045730 | orchestrator | 2025-02-19 08:53:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:54.045858 | orchestrator | 2025-02-19 08:53:54 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:54.046199 | orchestrator | 2025-02-19 08:53:54 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:54.046925 | orchestrator | 2025-02-19 08:53:54 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:54.046980 | orchestrator | 2025-02-19 08:53:54 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:53:54.050906 | orchestrator | 2025-02-19 08:53:54 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:57.102407 | orchestrator | 2025-02-19 08:53:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:53:57.102529 | orchestrator | 2025-02-19 08:53:57 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:53:57.103940 | orchestrator | 2025-02-19 08:53:57 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:53:57.104332 | orchestrator | 2025-02-19 08:53:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:53:57.105758 | orchestrator | 2025-02-19 08:53:57 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:53:57.106362 | orchestrator | 2025-02-19 08:53:57 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:53:57.106728 | orchestrator | 2025-02-19 08:53:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:00.152986 | orchestrator | 2025-02-19 08:54:00 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:00.158322 | orchestrator | 2025-02-19 08:54:00 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is 
in state STARTED 2025-02-19 08:54:00.158854 | orchestrator | 2025-02-19 08:54:00 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:00.158893 | orchestrator | 2025-02-19 08:54:00 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:00.159413 | orchestrator | 2025-02-19 08:54:00 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:03.220091 | orchestrator | 2025-02-19 08:54:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:03.220250 | orchestrator | 2025-02-19 08:54:03 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:03.221013 | orchestrator | 2025-02-19 08:54:03 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:03.223740 | orchestrator | 2025-02-19 08:54:03 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:03.225144 | orchestrator | 2025-02-19 08:54:03 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:03.225620 | orchestrator | 2025-02-19 08:54:03 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:06.275342 | orchestrator | 2025-02-19 08:54:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:06.275481 | orchestrator | 2025-02-19 08:54:06 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:06.275915 | orchestrator | 2025-02-19 08:54:06 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:06.279331 | orchestrator | 2025-02-19 08:54:06 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:06.280093 | orchestrator | 2025-02-19 08:54:06 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:06.282870 | orchestrator | 2025-02-19 08:54:06 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:09.324833 | orchestrator | 2025-02-19 08:54:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:09.324986 | orchestrator | 2025-02-19 08:54:09 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:09.325500 | orchestrator | 2025-02-19 08:54:09 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:09.326267 | orchestrator | 2025-02-19 08:54:09 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:09.327266 | orchestrator | 2025-02-19 08:54:09 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:09.328916 | orchestrator | 2025-02-19 08:54:09 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:12.368623 | orchestrator | 2025-02-19 08:54:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:12.368817 | orchestrator | 2025-02-19 08:54:12 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:12.372217 | orchestrator | 2025-02-19 08:54:12 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:12.373914 | orchestrator | 2025-02-19 08:54:12 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:15.431818 | orchestrator | 2025-02-19 08:54:12 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:15.431948 | orchestrator | 2025-02-19 08:54:12 | INFO  | Task 
80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:15.431968 | orchestrator | 2025-02-19 08:54:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:15.432002 | orchestrator | 2025-02-19 08:54:15 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:15.433340 | orchestrator | 2025-02-19 08:54:15 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:15.434857 | orchestrator | 2025-02-19 08:54:15 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:15.436433 | orchestrator | 2025-02-19 08:54:15 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:15.437867 | orchestrator | 2025-02-19 08:54:15 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:15.438007 | orchestrator | 2025-02-19 08:54:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:18.490433 | orchestrator | 2025-02-19 08:54:18 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:18.491146 | orchestrator | 2025-02-19 08:54:18 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:18.493954 | orchestrator | 2025-02-19 08:54:18 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:18.497635 | orchestrator | 2025-02-19 08:54:18 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:18.498495 | orchestrator | 2025-02-19 08:54:18 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:21.548773 | orchestrator | 2025-02-19 08:54:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:21.548917 | orchestrator | 2025-02-19 08:54:21 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:21.549222 | orchestrator | 2025-02-19 08:54:21 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:21.550366 | orchestrator | 2025-02-19 08:54:21 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:21.551125 | orchestrator | 2025-02-19 08:54:21 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:21.553745 | orchestrator | 2025-02-19 08:54:21 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:24.598204 | orchestrator | 2025-02-19 08:54:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:24.598346 | orchestrator | 2025-02-19 08:54:24 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:24.598861 | orchestrator | 2025-02-19 08:54:24 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:24.600252 | orchestrator | 2025-02-19 08:54:24 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:24.601952 | orchestrator | 2025-02-19 08:54:24 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:24.605181 | orchestrator | 2025-02-19 08:54:24 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:27.652038 | orchestrator | 2025-02-19 08:54:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:27.652185 | orchestrator | 2025-02-19 08:54:27 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:27.656335 | orchestrator | 2025-02-19 08:54:27 | INFO  | Task 
ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:27.656394 | orchestrator | 2025-02-19 08:54:27 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:27.656810 | orchestrator | 2025-02-19 08:54:27 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:27.657432 | orchestrator | 2025-02-19 08:54:27 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:30.719542 | orchestrator | 2025-02-19 08:54:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:30.719689 | orchestrator | 2025-02-19 08:54:30 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:30.719910 | orchestrator | 2025-02-19 08:54:30 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:30.721308 | orchestrator | 2025-02-19 08:54:30 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:30.722440 | orchestrator | 2025-02-19 08:54:30 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:30.723833 | orchestrator | 2025-02-19 08:54:30 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:33.769177 | orchestrator | 2025-02-19 08:54:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:33.769353 | orchestrator | 2025-02-19 08:54:33 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:33.769486 | orchestrator | 2025-02-19 08:54:33 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:33.772315 | orchestrator | 2025-02-19 08:54:33 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:33.777283 | orchestrator | 2025-02-19 08:54:33 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:33.783095 | orchestrator | 2025-02-19 08:54:33 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:36.827889 | orchestrator | 2025-02-19 08:54:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:36.828010 | orchestrator | 2025-02-19 08:54:36 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:36.829737 | orchestrator | 2025-02-19 08:54:36 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:36.830763 | orchestrator | 2025-02-19 08:54:36 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:36.831788 | orchestrator | 2025-02-19 08:54:36 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:36.832344 | orchestrator | 2025-02-19 08:54:36 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:39.898853 | orchestrator | 2025-02-19 08:54:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:39.898982 | orchestrator | 2025-02-19 08:54:39 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:39.902262 | orchestrator | 2025-02-19 08:54:39 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:39.904266 | orchestrator | 2025-02-19 08:54:39 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:39.907909 | orchestrator | 2025-02-19 08:54:39 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:39.911223 | orchestrator | 2025-02-19 
08:54:39 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:42.987259 | orchestrator | 2025-02-19 08:54:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:42.987373 | orchestrator | 2025-02-19 08:54:42 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:42.990439 | orchestrator | 2025-02-19 08:54:42 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:42.991169 | orchestrator | 2025-02-19 08:54:42 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:42.997309 | orchestrator | 2025-02-19 08:54:42 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:42.998201 | orchestrator | 2025-02-19 08:54:42 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:46.062840 | orchestrator | 2025-02-19 08:54:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:46.062988 | orchestrator | 2025-02-19 08:54:46 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:46.063373 | orchestrator | 2025-02-19 08:54:46 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:46.064439 | orchestrator | 2025-02-19 08:54:46 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:46.065322 | orchestrator | 2025-02-19 08:54:46 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:46.069092 | orchestrator | 2025-02-19 08:54:46 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:46.069583 | orchestrator | 2025-02-19 08:54:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:49.122702 | orchestrator | 2025-02-19 08:54:49 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:49.123858 | orchestrator | 2025-02-19 08:54:49 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:49.125063 | orchestrator | 2025-02-19 08:54:49 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:49.125108 | orchestrator | 2025-02-19 08:54:49 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:49.126186 | orchestrator | 2025-02-19 08:54:49 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:49.126301 | orchestrator | 2025-02-19 08:54:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:52.184599 | orchestrator | 2025-02-19 08:54:52 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:52.185168 | orchestrator | 2025-02-19 08:54:52 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:52.185976 | orchestrator | 2025-02-19 08:54:52 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:52.187626 | orchestrator | 2025-02-19 08:54:52 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:52.189313 | orchestrator | 2025-02-19 08:54:52 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:52.189584 | orchestrator | 2025-02-19 08:54:52 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:55.226933 | orchestrator | 2025-02-19 08:54:55 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:55.227584 | orchestrator | 2025-02-19 
08:54:55 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:55.228385 | orchestrator | 2025-02-19 08:54:55 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:55.229393 | orchestrator | 2025-02-19 08:54:55 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:55.230608 | orchestrator | 2025-02-19 08:54:55 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:54:58.277061 | orchestrator | 2025-02-19 08:54:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:54:58.277206 | orchestrator | 2025-02-19 08:54:58 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:54:58.277880 | orchestrator | 2025-02-19 08:54:58 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:54:58.281007 | orchestrator | 2025-02-19 08:54:58 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:54:58.282125 | orchestrator | 2025-02-19 08:54:58 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:54:58.285986 | orchestrator | 2025-02-19 08:54:58 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:01.340885 | orchestrator | 2025-02-19 08:54:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:01.341051 | orchestrator | 2025-02-19 08:55:01 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:01.342599 | orchestrator | 2025-02-19 08:55:01 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:55:01.345063 | orchestrator | 2025-02-19 08:55:01 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:01.345826 | orchestrator | 2025-02-19 08:55:01 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:01.348212 | orchestrator | 2025-02-19 08:55:01 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:04.413032 | orchestrator | 2025-02-19 08:55:01 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:04.413173 | orchestrator | 2025-02-19 08:55:04 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:04.413574 | orchestrator | 2025-02-19 08:55:04 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:55:04.414695 | orchestrator | 2025-02-19 08:55:04 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:04.416135 | orchestrator | 2025-02-19 08:55:04 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:04.418096 | orchestrator | 2025-02-19 08:55:04 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:07.485025 | orchestrator | 2025-02-19 08:55:04 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:07.485202 | orchestrator | 2025-02-19 08:55:07 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:07.485646 | orchestrator | 2025-02-19 08:55:07 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:55:07.486647 | orchestrator | 2025-02-19 08:55:07 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:07.492105 | orchestrator | 2025-02-19 08:55:07 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:07.496724 | 
orchestrator | 2025-02-19 08:55:07 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:10.564193 | orchestrator | 2025-02-19 08:55:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:10.564325 | orchestrator | 2025-02-19 08:55:10 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:10.565857 | orchestrator | 2025-02-19 08:55:10 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:55:10.571197 | orchestrator | 2025-02-19 08:55:10 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:10.572012 | orchestrator | 2025-02-19 08:55:10 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:10.573053 | orchestrator | 2025-02-19 08:55:10 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:13.617382 | orchestrator | 2025-02-19 08:55:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:13.617498 | orchestrator | 2025-02-19 08:55:13 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:13.618459 | orchestrator | 2025-02-19 08:55:13 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state STARTED 2025-02-19 08:55:13.618485 | orchestrator | 2025-02-19 08:55:13 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:13.620276 | orchestrator | 2025-02-19 08:55:13 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:13.620298 | orchestrator | 2025-02-19 08:55:13 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:16.653254 | orchestrator | 2025-02-19 08:55:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:16.653398 | orchestrator | 2025-02-19 08:55:16 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:16.654579 | orchestrator | 2025-02-19 08:55:16 | INFO  | Task ebbde9fb-38b2-4b50-aa03-4009fc98a1cd is in state SUCCESS 2025-02-19 08:55:16.654645 | orchestrator | 2025-02-19 08:55:16.654661 | orchestrator | 2025-02-19 08:55:16.654676 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-02-19 08:55:16.654709 | orchestrator | 2025-02-19 08:55:16.654723 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-02-19 08:55:16.654899 | orchestrator | Wednesday 19 February 2025 08:52:45 +0000 (0:00:01.306) 0:00:01.306 **** 2025-02-19 08:55:16.654921 | orchestrator | ok: [localhost] => { 2025-02-19 08:55:16.654937 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-02-19 08:55:16.654952 | orchestrator | } 2025-02-19 08:55:16.654966 | orchestrator | 2025-02-19 08:55:16.654980 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-02-19 08:55:16.654994 | orchestrator | Wednesday 19 February 2025 08:52:45 +0000 (0:00:00.283) 0:00:01.589 **** 2025-02-19 08:55:16.655010 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-02-19 08:55:16.655025 | orchestrator | ...ignoring 2025-02-19 08:55:16.655039 | orchestrator | 2025-02-19 08:55:16.655053 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-02-19 08:55:16.655067 | orchestrator | Wednesday 19 February 2025 08:52:50 +0000 (0:00:04.883) 0:00:06.472 **** 2025-02-19 08:55:16.655081 | orchestrator | skipping: [localhost] 2025-02-19 08:55:16.655095 | orchestrator | 2025-02-19 08:55:16.655109 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-02-19 08:55:16.655123 | orchestrator | Wednesday 19 February 2025 08:52:50 +0000 (0:00:00.086) 0:00:06.558 **** 2025-02-19 08:55:16.655137 | orchestrator | ok: [localhost] 2025-02-19 08:55:16.655151 | orchestrator | 2025-02-19 08:55:16.655165 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 08:55:16.655180 | orchestrator | 2025-02-19 08:55:16.655203 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 08:55:16.655242 | orchestrator | Wednesday 19 February 2025 08:52:50 +0000 (0:00:00.219) 0:00:06.778 **** 2025-02-19 08:55:16.655258 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:55:16.655274 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:55:16.655290 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:55:16.655305 | orchestrator | 2025-02-19 08:55:16.655321 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 08:55:16.655337 | orchestrator | Wednesday 19 February 2025 08:52:51 +0000 (0:00:01.143) 0:00:07.921 **** 2025-02-19 08:55:16.655353 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-02-19 08:55:16.655369 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-02-19 08:55:16.655385 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-02-19 08:55:16.655401 | orchestrator | 2025-02-19 08:55:16.655416 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-02-19 08:55:16.655483 | orchestrator | 2025-02-19 08:55:16.655499 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-19 08:55:16.655516 | orchestrator | Wednesday 19 February 2025 08:52:52 +0000 (0:00:01.110) 0:00:09.032 **** 2025-02-19 08:55:16.655532 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:55:16.655547 | orchestrator | 2025-02-19 08:55:16.655560 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-02-19 08:55:16.655574 | orchestrator | Wednesday 19 February 2025 08:52:56 +0000 (0:00:03.539) 0:00:12.571 **** 2025-02-19 08:55:16.655588 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:55:16.655602 | orchestrator | 2025-02-19 08:55:16.655615 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-02-19 08:55:16.655629 | orchestrator | Wednesday 19 February 2025 08:52:59 +0000 (0:00:02.696) 0:00:15.268 **** 2025-02-19 08:55:16.655643 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:55:16.655682 | orchestrator | 2025-02-19 08:55:16.655697 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-02-19 08:55:16.655711 | orchestrator | Wednesday 19 February 2025 08:53:00 +0000 (0:00:01.068) 0:00:16.337 **** 2025-02-19 08:55:16.655724 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:55:16.655738 | orchestrator | 2025-02-19 08:55:16.655773 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-02-19 08:55:16.655787 | orchestrator | Wednesday 19 February 2025 08:53:01 +0000 (0:00:01.119) 0:00:17.456 **** 2025-02-19 08:55:16.655801 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:55:16.655814 | orchestrator | 2025-02-19 08:55:16.655828 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-02-19 08:55:16.655842 | orchestrator | Wednesday 19 February 2025 08:53:01 +0000 (0:00:00.395) 0:00:17.852 **** 2025-02-19 08:55:16.655856 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:55:16.655869 | orchestrator | 2025-02-19 08:55:16.655883 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-19 08:55:16.655897 | orchestrator | Wednesday 19 February 2025 08:53:02 +0000 (0:00:00.473) 0:00:18.326 **** 2025-02-19 08:55:16.655910 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:55:16.655924 | orchestrator | 2025-02-19 08:55:16.655938 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-02-19 08:55:16.655952 | orchestrator | Wednesday 19 February 2025 08:53:03 +0000 (0:00:01.153) 0:00:19.479 **** 2025-02-19 08:55:16.655965 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:55:16.655979 | orchestrator | 2025-02-19 08:55:16.655993 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-02-19 08:55:16.656007 | orchestrator | Wednesday 19 February 2025 08:53:04 +0000 (0:00:01.301) 0:00:20.781 **** 2025-02-19 08:55:16.656021 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:55:16.656035 | orchestrator | 2025-02-19 08:55:16.656049 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-02-19 08:55:16.656073 | orchestrator | Wednesday 19 February 2025 08:53:04 +0000 (0:00:00.375) 0:00:21.156 **** 2025-02-19 08:55:16.656088 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:55:16.656102 | orchestrator | 2025-02-19 08:55:16.656127 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-02-19 08:55:16.656141 | orchestrator | Wednesday 19 February 2025 08:53:05 +0000 (0:00:00.368) 0:00:21.525 **** 2025-02-19 08:55:16.656158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.656177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.656192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.656207 | orchestrator | 2025-02-19 08:55:16.656221 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-02-19 08:55:16.656234 | orchestrator | Wednesday 19 February 2025 08:53:06 +0000 (0:00:01.052) 0:00:22.577 **** 2025-02-19 08:55:16.656259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.656287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.656303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.656317 | orchestrator | 2025-02-19 08:55:16.656332 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-02-19 08:55:16.656346 | orchestrator | Wednesday 19 February 2025 08:53:08 +0000 (0:00:02.488) 0:00:25.066 **** 2025-02-19 08:55:16.656360 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-19 08:55:16.656374 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-19 08:55:16.656388 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-19 08:55:16.656402 | orchestrator | 2025-02-19 08:55:16.656422 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-02-19 08:55:16.656436 | orchestrator | Wednesday 19 February 2025 08:53:10 +0000 (0:00:02.127) 0:00:27.193 **** 2025-02-19 08:55:16.656450 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 
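The "Copying over ..." tasks above and below all follow the same kolla-ansible pattern: render a Jinja2 template shipped with the role into /etc/kolla/rabbitmq/ on each node (the directory that the volumes list above bind-mounts read-only into the container as /var/lib/kolla/config_files/) and notify the "Restart rabbitmq container" handler that runs later in this log. A minimal sketch of one such task, with an assumed destination path and file mode rather than the actual role code:

  - name: Copying over rabbitmq.conf
    become: true
    ansible.builtin.template:
      # Source template path as it appears in the log output above.
      src: /ansible/roles/rabbitmq/templates/rabbitmq.conf.j2
      # Rendered onto the host; the container sees it via the config_files mount.
      dest: /etc/kolla/rabbitmq/rabbitmq.conf
      mode: "0660"  # assumed mode, not shown in the log
    notify:
      - Restart rabbitmq container

Because the task only notifies the handler, several template changes on a node collapse into the single container restart seen in the "RUNNING HANDLER" step further down.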
2025-02-19 08:55:16.656473 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-19 08:55:16.656487 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-19 08:55:16.656501 | orchestrator | 2025-02-19 08:55:16.656514 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-02-19 08:55:16.656528 | orchestrator | Wednesday 19 February 2025 08:53:15 +0000 (0:00:04.885) 0:00:32.078 **** 2025-02-19 08:55:16.656542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-19 08:55:16.656556 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-19 08:55:16.656569 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-19 08:55:16.656583 | orchestrator | 2025-02-19 08:55:16.656597 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-02-19 08:55:16.656611 | orchestrator | Wednesday 19 February 2025 08:53:17 +0000 (0:00:02.094) 0:00:34.173 **** 2025-02-19 08:55:16.656631 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-19 08:55:16.656645 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-19 08:55:16.656660 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-19 08:55:16.656673 | orchestrator | 2025-02-19 08:55:16.656687 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-02-19 08:55:16.656701 | orchestrator | Wednesday 19 February 2025 08:53:20 +0000 (0:00:02.446) 0:00:36.620 **** 2025-02-19 08:55:16.656715 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-19 08:55:16.656729 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-19 08:55:16.656758 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-19 08:55:16.656773 | orchestrator | 2025-02-19 08:55:16.656787 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-02-19 08:55:16.656801 | orchestrator | Wednesday 19 February 2025 08:53:21 +0000 (0:00:01.525) 0:00:38.145 **** 2025-02-19 08:55:16.656815 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-19 08:55:16.656829 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-19 08:55:16.656843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-19 08:55:16.656857 | orchestrator | 2025-02-19 08:55:16.656871 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-19 08:55:16.656885 | orchestrator | Wednesday 19 February 2025 08:53:23 +0000 (0:00:01.667) 0:00:39.813 **** 2025-02-19 08:55:16.656899 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:55:16.656918 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:55:16.656941 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:55:16.656955 | orchestrator | 2025-02-19 
08:55:16.656969 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-02-19 08:55:16.656983 | orchestrator | Wednesday 19 February 2025 08:53:24 +0000 (0:00:00.699) 0:00:40.512 **** 2025-02-19 08:55:16.656997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.657020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.657051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 08:55:16.657066 | orchestrator | 2025-02-19 
08:55:16.657080 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-02-19 08:55:16.657094 | orchestrator | Wednesday 19 February 2025 08:53:26 +0000 (0:00:01.937) 0:00:42.450 **** 2025-02-19 08:55:16.657108 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:55:16.657121 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:55:16.657135 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:55:16.657149 | orchestrator | 2025-02-19 08:55:16.657162 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-02-19 08:55:16.657176 | orchestrator | Wednesday 19 February 2025 08:53:27 +0000 (0:00:00.945) 0:00:43.396 **** 2025-02-19 08:55:16.657190 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:55:16.657203 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:55:16.657217 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:55:16.657230 | orchestrator | 2025-02-19 08:55:16.657244 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-02-19 08:55:16.657257 | orchestrator | Wednesday 19 February 2025 08:53:33 +0000 (0:00:06.704) 0:00:50.101 **** 2025-02-19 08:55:16.657271 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:55:16.657285 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:55:16.657299 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:55:16.657313 | orchestrator | 2025-02-19 08:55:16.657326 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-19 08:55:16.657348 | orchestrator | 2025-02-19 08:55:16.657362 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-19 08:55:16.657375 | orchestrator | Wednesday 19 February 2025 08:53:34 +0000 (0:00:00.506) 0:00:50.607 **** 2025-02-19 08:55:16.657389 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:55:16.657403 | orchestrator | 2025-02-19 08:55:16.657417 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-19 08:55:16.657430 | orchestrator | Wednesday 19 February 2025 08:53:35 +0000 (0:00:00.673) 0:00:51.280 **** 2025-02-19 08:55:16.657444 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:55:16.657457 | orchestrator | 2025-02-19 08:55:16.657471 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-19 08:55:16.657485 | orchestrator | Wednesday 19 February 2025 08:53:35 +0000 (0:00:00.512) 0:00:51.793 **** 2025-02-19 08:55:16.657499 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:55:16.657513 | orchestrator | 2025-02-19 08:55:16.657527 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-19 08:55:16.657540 | orchestrator | Wednesday 19 February 2025 08:53:38 +0000 (0:00:03.161) 0:00:54.954 **** 2025-02-19 08:55:16.657554 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:55:16.657568 | orchestrator | 2025-02-19 08:55:16.657586 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-19 08:55:16.657600 | orchestrator | 2025-02-19 08:55:16.657614 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-19 08:55:16.657628 | orchestrator | Wednesday 19 February 2025 08:54:33 +0000 (0:00:54.640) 0:01:49.595 **** 2025-02-19 08:55:16.657641 | orchestrator | ok: [testbed-node-1] 2025-02-19 
08:55:16.657655 | orchestrator | 2025-02-19 08:55:16.657669 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-19 08:55:16.657683 | orchestrator | Wednesday 19 February 2025 08:54:34 +0000 (0:00:01.100) 0:01:50.695 **** 2025-02-19 08:55:16.657696 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:55:16.657715 | orchestrator | 2025-02-19 08:55:16.657739 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-19 08:55:16.657785 | orchestrator | Wednesday 19 February 2025 08:54:34 +0000 (0:00:00.275) 0:01:50.971 **** 2025-02-19 08:55:16.657808 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:55:16.657830 | orchestrator | 2025-02-19 08:55:16.657851 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-19 08:55:16.657873 | orchestrator | Wednesday 19 February 2025 08:54:42 +0000 (0:00:07.672) 0:01:58.643 **** 2025-02-19 08:55:16.657895 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:55:16.657917 | orchestrator | 2025-02-19 08:55:16.657932 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-19 08:55:16.657946 | orchestrator | 2025-02-19 08:55:16.657960 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-19 08:55:16.657973 | orchestrator | Wednesday 19 February 2025 08:54:51 +0000 (0:00:09.206) 0:02:07.850 **** 2025-02-19 08:55:16.657987 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:55:16.658008 | orchestrator | 2025-02-19 08:55:16.658080 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-19 08:55:16.658105 | orchestrator | Wednesday 19 February 2025 08:54:52 +0000 (0:00:00.908) 0:02:08.758 **** 2025-02-19 08:55:16.658127 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:55:16.658152 | orchestrator | 2025-02-19 08:55:16.658170 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-19 08:55:16.658184 | orchestrator | Wednesday 19 February 2025 08:54:52 +0000 (0:00:00.168) 0:02:08.926 **** 2025-02-19 08:55:16.658198 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:55:16.658212 | orchestrator | 2025-02-19 08:55:16.658227 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-19 08:55:16.658250 | orchestrator | Wednesday 19 February 2025 08:54:59 +0000 (0:00:07.048) 0:02:15.975 **** 2025-02-19 08:55:16.658383 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:55:16.658514 | orchestrator | 2025-02-19 08:55:16.658533 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-02-19 08:55:16.658544 | orchestrator | 2025-02-19 08:55:16.658552 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-02-19 08:55:16.658561 | orchestrator | Wednesday 19 February 2025 08:55:10 +0000 (0:00:11.092) 0:02:27.067 **** 2025-02-19 08:55:16.658570 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:55:16.658578 | orchestrator | 2025-02-19 08:55:16.658587 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-02-19 08:55:16.658595 | orchestrator | Wednesday 19 February 2025 08:55:11 +0000 (0:00:01.146) 0:02:28.214 **** 2025-02-19 08:55:16.658603 | orchestrator | 
[WARNING]: Could not match supplied host pattern, ignoring:
2025-02-19 08:55:16.658612 | orchestrator | enable_outward_rabbitmq_True
2025-02-19 08:55:16.658621 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-02-19 08:55:16.658629 | orchestrator | outward_rabbitmq_restart
2025-02-19 08:55:16.658638 | orchestrator | ok: [testbed-node-0]
2025-02-19 08:55:16.658648 | orchestrator | ok: [testbed-node-1]
2025-02-19 08:55:16.658656 | orchestrator | ok: [testbed-node-2]
2025-02-19 08:55:16.658665 | orchestrator |
2025-02-19 08:55:16.658674 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-02-19 08:55:16.658682 | orchestrator | skipping: no hosts matched
2025-02-19 08:55:16.658691 | orchestrator |
2025-02-19 08:55:16.658700 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-02-19 08:55:16.658709 | orchestrator | skipping: no hosts matched
2025-02-19 08:55:16.658717 | orchestrator |
2025-02-19 08:55:16.658726 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-02-19 08:55:16.658734 | orchestrator | skipping: no hosts matched
2025-02-19 08:55:16.658776 | orchestrator |
2025-02-19 08:55:16.658787 | orchestrator | PLAY RECAP *********************************************************************
2025-02-19 08:55:16.658796 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-02-19 08:55:16.658806 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-02-19 08:55:16.658815 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-19 08:55:16.658825 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-19 08:55:16.658834 | orchestrator |
2025-02-19 08:55:16.658843 | orchestrator |
2025-02-19 08:55:16.658851 | orchestrator | TASKS RECAP ********************************************************************
2025-02-19 08:55:16.658860 | orchestrator | Wednesday 19 February 2025 08:55:14 +0000 (0:00:02.300) 0:02:30.514 ****
2025-02-19 08:55:16.658869 | orchestrator | ===============================================================================
2025-02-19 08:55:16.658877 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 74.94s
2025-02-19 08:55:16.658898 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 17.88s
2025-02-19 08:55:16.658907 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.71s
2025-02-19 08:55:16.658915 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.89s
2025-02-19 08:55:16.658924 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.88s
2025-02-19 08:55:16.658932 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.54s
2025-02-19 08:55:16.658942 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.70s
2025-02-19 08:55:16.658952 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.68s
2025-02-19 08:55:16.658961 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.49s
2025-02-19 08:55:16.658978 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.45s
2025-02-19 08:55:16.658989 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.30s
2025-02-19 08:55:16.658998 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.13s
2025-02-19 08:55:16.659008 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.09s
2025-02-19 08:55:16.659018 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.94s
2025-02-19 08:55:16.659027 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.67s
2025-02-19 08:55:16.659037 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.53s
2025-02-19 08:55:16.659047 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.30s
2025-02-19 08:55:16.659056 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.15s
2025-02-19 08:55:16.659066 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.15s
2025-02-19 08:55:16.659076 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.14s
2025-02-19 08:55:16.659086 | orchestrator | 2025-02-19 08:55:16 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED
2025-02-19 08:55:16.659107 | orchestrator | 2025-02-19 08:55:16 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED
2025-02-19 08:55:16.661240 | orchestrator | 2025-02-19 08:55:16 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED
2025-02-19 08:55:19.719459 | orchestrator | 2025-02-19 08:55:16 | INFO  | Wait 1 second(s) until the next check
2025-02-19 08:55:19.719601 | orchestrator | 2025-02-19 08:55:19 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED
2025-02-19 08:55:19.721196 | orchestrator | 2025-02-19 08:55:19 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED
2025-02-19 08:55:19.721370 | orchestrator | 2025-02-19 08:55:19 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED
2025-02-19 08:55:19.721394 | orchestrator | 2025-02-19 08:55:19 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED
2025-02-19 08:55:22.774479 | orchestrator | 2025-02-19 08:55:19 | INFO  | Wait 1 second(s) until the next check
2025-02-19 08:55:22.774656 | orchestrator | 2025-02-19 08:55:22 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED
2025-02-19 08:55:22.775324 | orchestrator | 2025-02-19 08:55:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED
2025-02-19 08:55:22.775376 | orchestrator | 2025-02-19 08:55:22 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED
2025-02-19 08:55:22.778898 | orchestrator | 2025-02-19 08:55:22 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED
2025-02-19 08:55:25.825379 | orchestrator | 2025-02-19 08:55:22 | INFO  | Wait 1 second(s) until the next check
2025-02-19 08:55:25.825528 | orchestrator | 2025-02-19 08:55:25 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED
2025-02-19 08:55:25.826569 | orchestrator | 2025-02-19 08:55:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED
2025-02-19 08:55:25.827383 | orchestrator | 2025-02-19 08:55:25 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED
2025-02-19
08:55:25.827413 | orchestrator | 2025-02-19 08:55:25 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:28.893135 | orchestrator | 2025-02-19 08:55:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:28.893277 | orchestrator | 2025-02-19 08:55:28 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:28.895387 | orchestrator | 2025-02-19 08:55:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:28.901051 | orchestrator | 2025-02-19 08:55:28 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:28.908475 | orchestrator | 2025-02-19 08:55:28 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:31.978976 | orchestrator | 2025-02-19 08:55:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:31.979157 | orchestrator | 2025-02-19 08:55:31 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:31.990743 | orchestrator | 2025-02-19 08:55:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:32.000242 | orchestrator | 2025-02-19 08:55:31 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:32.019830 | orchestrator | 2025-02-19 08:55:32 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:35.097254 | orchestrator | 2025-02-19 08:55:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:35.097368 | orchestrator | 2025-02-19 08:55:35 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:35.098555 | orchestrator | 2025-02-19 08:55:35 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:35.100556 | orchestrator | 2025-02-19 08:55:35 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:35.103661 | orchestrator | 2025-02-19 08:55:35 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:38.174209 | orchestrator | 2025-02-19 08:55:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:38.174391 | orchestrator | 2025-02-19 08:55:38 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:38.177992 | orchestrator | 2025-02-19 08:55:38 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:38.190156 | orchestrator | 2025-02-19 08:55:38 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:41.239602 | orchestrator | 2025-02-19 08:55:38 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:41.239726 | orchestrator | 2025-02-19 08:55:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:41.239764 | orchestrator | 2025-02-19 08:55:41 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:41.244393 | orchestrator | 2025-02-19 08:55:41 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:41.246687 | orchestrator | 2025-02-19 08:55:41 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:41.250153 | orchestrator | 2025-02-19 08:55:41 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:41.251331 | orchestrator | 2025-02-19 08:55:41 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:44.300402 | 
orchestrator | 2025-02-19 08:55:44 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:44.302321 | orchestrator | 2025-02-19 08:55:44 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:44.302350 | orchestrator | 2025-02-19 08:55:44 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:44.302560 | orchestrator | 2025-02-19 08:55:44 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:44.303495 | orchestrator | 2025-02-19 08:55:44 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:47.351577 | orchestrator | 2025-02-19 08:55:47 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:47.352183 | orchestrator | 2025-02-19 08:55:47 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:47.352262 | orchestrator | 2025-02-19 08:55:47 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:47.353313 | orchestrator | 2025-02-19 08:55:47 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:50.417623 | orchestrator | 2025-02-19 08:55:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:50.417768 | orchestrator | 2025-02-19 08:55:50 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:50.418565 | orchestrator | 2025-02-19 08:55:50 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:50.419997 | orchestrator | 2025-02-19 08:55:50 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:50.421280 | orchestrator | 2025-02-19 08:55:50 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:53.483065 | orchestrator | 2025-02-19 08:55:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:53.483208 | orchestrator | 2025-02-19 08:55:53 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:53.488348 | orchestrator | 2025-02-19 08:55:53 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:53.490123 | orchestrator | 2025-02-19 08:55:53 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:53.491227 | orchestrator | 2025-02-19 08:55:53 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:56.558592 | orchestrator | 2025-02-19 08:55:53 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:56.558744 | orchestrator | 2025-02-19 08:55:56 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:56.558997 | orchestrator | 2025-02-19 08:55:56 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:56.568382 | orchestrator | 2025-02-19 08:55:56 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:59.633452 | orchestrator | 2025-02-19 08:55:56 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:55:59.633624 | orchestrator | 2025-02-19 08:55:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:55:59.633668 | orchestrator | 2025-02-19 08:55:59 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:55:59.634637 | orchestrator | 2025-02-19 08:55:59 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:55:59.635070 | 
orchestrator | 2025-02-19 08:55:59 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:55:59.638216 | orchestrator | 2025-02-19 08:55:59 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:02.693162 | orchestrator | 2025-02-19 08:55:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:02.693257 | orchestrator | 2025-02-19 08:56:02 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:02.696981 | orchestrator | 2025-02-19 08:56:02 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:02.700467 | orchestrator | 2025-02-19 08:56:02 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:02.710198 | orchestrator | 2025-02-19 08:56:02 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:05.788186 | orchestrator | 2025-02-19 08:56:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:05.788331 | orchestrator | 2025-02-19 08:56:05 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:05.790521 | orchestrator | 2025-02-19 08:56:05 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:05.792141 | orchestrator | 2025-02-19 08:56:05 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:05.793274 | orchestrator | 2025-02-19 08:56:05 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:05.793378 | orchestrator | 2025-02-19 08:56:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:08.835374 | orchestrator | 2025-02-19 08:56:08 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:08.836372 | orchestrator | 2025-02-19 08:56:08 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:08.837663 | orchestrator | 2025-02-19 08:56:08 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:08.838717 | orchestrator | 2025-02-19 08:56:08 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:08.839860 | orchestrator | 2025-02-19 08:56:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:11.875993 | orchestrator | 2025-02-19 08:56:11 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:11.876337 | orchestrator | 2025-02-19 08:56:11 | INFO  | Task c2dee221-363d-4eed-ae50-5802c7f81418 is in state STARTED 2025-02-19 08:56:11.877531 | orchestrator | 2025-02-19 08:56:11 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:11.879569 | orchestrator | 2025-02-19 08:56:11 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:11.881295 | orchestrator | 2025-02-19 08:56:11 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:14.949339 | orchestrator | 2025-02-19 08:56:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:14.949494 | orchestrator | 2025-02-19 08:56:14 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:14.950666 | orchestrator | 2025-02-19 08:56:14 | INFO  | Task c2dee221-363d-4eed-ae50-5802c7f81418 is in state STARTED 2025-02-19 08:56:14.950797 | orchestrator | 2025-02-19 08:56:14 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:14.951470 | 
orchestrator | 2025-02-19 08:56:14 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:14.952496 | orchestrator | 2025-02-19 08:56:14 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:18.000133 | orchestrator | 2025-02-19 08:56:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:18.000391 | orchestrator | 2025-02-19 08:56:17 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:18.000509 | orchestrator | 2025-02-19 08:56:17 | INFO  | Task c2dee221-363d-4eed-ae50-5802c7f81418 is in state STARTED 2025-02-19 08:56:18.004741 | orchestrator | 2025-02-19 08:56:18 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:18.018097 | orchestrator | 2025-02-19 08:56:18 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:18.026327 | orchestrator | 2025-02-19 08:56:18 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:21.095153 | orchestrator | 2025-02-19 08:56:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:21.095292 | orchestrator | 2025-02-19 08:56:21 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:21.101465 | orchestrator | 2025-02-19 08:56:21 | INFO  | Task c2dee221-363d-4eed-ae50-5802c7f81418 is in state STARTED 2025-02-19 08:56:24.182441 | orchestrator | 2025-02-19 08:56:21 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:24.182565 | orchestrator | 2025-02-19 08:56:21 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:24.182584 | orchestrator | 2025-02-19 08:56:21 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:24.182600 | orchestrator | 2025-02-19 08:56:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:24.182633 | orchestrator | 2025-02-19 08:56:24 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:24.184728 | orchestrator | 2025-02-19 08:56:24 | INFO  | Task c2dee221-363d-4eed-ae50-5802c7f81418 is in state STARTED 2025-02-19 08:56:24.186231 | orchestrator | 2025-02-19 08:56:24 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:24.192507 | orchestrator | 2025-02-19 08:56:24 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:24.201185 | orchestrator | 2025-02-19 08:56:24 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:27.263408 | orchestrator | 2025-02-19 08:56:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:27.263574 | orchestrator | 2025-02-19 08:56:27 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:27.264271 | orchestrator | 2025-02-19 08:56:27 | INFO  | Task c2dee221-363d-4eed-ae50-5802c7f81418 is in state STARTED 2025-02-19 08:56:27.265086 | orchestrator | 2025-02-19 08:56:27 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:27.265126 | orchestrator | 2025-02-19 08:56:27 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:27.267046 | orchestrator | 2025-02-19 08:56:27 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:30.302774 | orchestrator | 2025-02-19 08:56:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:30.302982 | 
orchestrator | 2025-02-19 08:56:30 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:30.304011 | orchestrator | 2025-02-19 08:56:30 | INFO  | Task c2dee221-363d-4eed-ae50-5802c7f81418 is in state SUCCESS 2025-02-19 08:56:30.304051 | orchestrator | 2025-02-19 08:56:30 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:30.304885 | orchestrator | 2025-02-19 08:56:30 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:30.305499 | orchestrator | 2025-02-19 08:56:30 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:30.305596 | orchestrator | 2025-02-19 08:56:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:33.335597 | orchestrator | 2025-02-19 08:56:33 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:33.338582 | orchestrator | 2025-02-19 08:56:33 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:33.339646 | orchestrator | 2025-02-19 08:56:33 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:33.342228 | orchestrator | 2025-02-19 08:56:33 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:36.378610 | orchestrator | 2025-02-19 08:56:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:36.378757 | orchestrator | 2025-02-19 08:56:36 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:36.382566 | orchestrator | 2025-02-19 08:56:36 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:36.386959 | orchestrator | 2025-02-19 08:56:36 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:36.387507 | orchestrator | 2025-02-19 08:56:36 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:39.426478 | orchestrator | 2025-02-19 08:56:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:39.426633 | orchestrator | 2025-02-19 08:56:39 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:39.427483 | orchestrator | 2025-02-19 08:56:39 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:39.428111 | orchestrator | 2025-02-19 08:56:39 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:39.429138 | orchestrator | 2025-02-19 08:56:39 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:39.429218 | orchestrator | 2025-02-19 08:56:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:42.462406 | orchestrator | 2025-02-19 08:56:42 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:42.463156 | orchestrator | 2025-02-19 08:56:42 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:42.463186 | orchestrator | 2025-02-19 08:56:42 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:42.463725 | orchestrator | 2025-02-19 08:56:42 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:42.463869 | orchestrator | 2025-02-19 08:56:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:45.506324 | orchestrator | 2025-02-19 08:56:45 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:45.507576 | 
orchestrator | 2025-02-19 08:56:45 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:45.507660 | orchestrator | 2025-02-19 08:56:45 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:45.508285 | orchestrator | 2025-02-19 08:56:45 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:48.561585 | orchestrator | 2025-02-19 08:56:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:48.561697 | orchestrator | 2025-02-19 08:56:48 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:48.563098 | orchestrator | 2025-02-19 08:56:48 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:48.563151 | orchestrator | 2025-02-19 08:56:48 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:48.567970 | orchestrator | 2025-02-19 08:56:48 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:51.607529 | orchestrator | 2025-02-19 08:56:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:51.607657 | orchestrator | 2025-02-19 08:56:51 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:51.608527 | orchestrator | 2025-02-19 08:56:51 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:51.609820 | orchestrator | 2025-02-19 08:56:51 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:51.611032 | orchestrator | 2025-02-19 08:56:51 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:54.659505 | orchestrator | 2025-02-19 08:56:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:54.659646 | orchestrator | 2025-02-19 08:56:54 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:54.662007 | orchestrator | 2025-02-19 08:56:54 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:54.665072 | orchestrator | 2025-02-19 08:56:54 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:54.666396 | orchestrator | 2025-02-19 08:56:54 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:56:57.711391 | orchestrator | 2025-02-19 08:56:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:56:57.711541 | orchestrator | 2025-02-19 08:56:57 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:56:57.712048 | orchestrator | 2025-02-19 08:56:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:56:57.712085 | orchestrator | 2025-02-19 08:56:57 | INFO  | Task 95655575-9d06-489e-8e59-97834d794861 is in state STARTED 2025-02-19 08:56:57.713037 | orchestrator | 2025-02-19 08:56:57 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:00.758663 | orchestrator | 2025-02-19 08:56:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:00.758780 | orchestrator | 2025-02-19 08:57:00 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:00.758875 | orchestrator | 2025-02-19 08:57:00 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:00.759947 | orchestrator | 2025-02-19 08:57:00.762255 | orchestrator | None 2025-02-19 08:57:00.762357 | orchestrator | 2025-02-19 08:57:00 | INFO  | 
Task 95655575-9d06-489e-8e59-97834d794861 is in state SUCCESS 2025-02-19 08:57:00.762396 | orchestrator | 2025-02-19 08:57:00.762414 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 08:57:00.762429 | orchestrator | 2025-02-19 08:57:00.762444 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 08:57:00.762458 | orchestrator | Wednesday 19 February 2025 08:53:51 +0000 (0:00:00.471) 0:00:00.471 **** 2025-02-19 08:57:00.762473 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.762488 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.762503 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.762517 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:57:00.762530 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:57:00.762550 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:57:00.762573 | orchestrator | 2025-02-19 08:57:00.762599 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 08:57:00.762624 | orchestrator | Wednesday 19 February 2025 08:53:52 +0000 (0:00:00.931) 0:00:01.402 **** 2025-02-19 08:57:00.762639 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-02-19 08:57:00.762653 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-02-19 08:57:00.762667 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-02-19 08:57:00.762681 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-02-19 08:57:00.762695 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-02-19 08:57:00.762709 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-02-19 08:57:00.762754 | orchestrator | 2025-02-19 08:57:00.762769 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-02-19 08:57:00.762783 | orchestrator | 2025-02-19 08:57:00.762797 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-02-19 08:57:00.762811 | orchestrator | Wednesday 19 February 2025 08:53:55 +0000 (0:00:02.731) 0:00:04.134 **** 2025-02-19 08:57:00.762826 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 08:57:00.762878 | orchestrator | 2025-02-19 08:57:00.762894 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-02-19 08:57:00.762908 | orchestrator | Wednesday 19 February 2025 08:53:56 +0000 (0:00:01.835) 0:00:05.969 **** 2025-02-19 08:57:00.762924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.762953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.762968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.762982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.762996 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763024 | orchestrator | 2025-02-19 08:57:00.763051 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-02-19 08:57:00.763067 | orchestrator | Wednesday 19 February 2025 08:53:58 +0000 (0:00:01.689) 0:00:07.659 **** 2025-02-19 08:57:00.763081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-02-19 08:57:00.763171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763201 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763215 | orchestrator | 2025-02-19 08:57:00.763230 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-02-19 08:57:00.763245 | orchestrator | Wednesday 19 February 2025 08:54:01 +0000 (0:00:02.738) 0:00:10.397 **** 2025-02-19 08:57:00.763259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763340 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763373 | orchestrator | 2025-02-19 08:57:00.763388 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-02-19 08:57:00.763402 | orchestrator | Wednesday 19 February 2025 08:54:02 +0000 (0:00:01.378) 0:00:11.775 **** 2025-02-19 08:57:00.763416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763488 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763508 | orchestrator | 2025-02-19 08:57:00.763529 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-02-19 08:57:00.763543 | orchestrator | Wednesday 19 February 2025 08:54:05 +0000 (0:00:02.680) 0:00:14.456 **** 2025-02-19 08:57:00.763562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.763683 | orchestrator | 2025-02-19 08:57:00.763697 | orchestrator | TASK [ovn-controller : Create br-int bridge on 
OpenvSwitch] ******************** 2025-02-19 08:57:00.763712 | orchestrator | Wednesday 19 February 2025 08:54:08 +0000 (0:00:02.943) 0:00:17.400 **** 2025-02-19 08:57:00.763726 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:57:00.763741 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.763755 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:57:00.763769 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:57:00.763783 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:57:00.763797 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:57:00.763811 | orchestrator | 2025-02-19 08:57:00.763825 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-02-19 08:57:00.763870 | orchestrator | Wednesday 19 February 2025 08:54:11 +0000 (0:00:03.556) 0:00:20.956 **** 2025-02-19 08:57:00.763894 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-02-19 08:57:00.763909 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-02-19 08:57:00.763923 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-02-19 08:57:00.763938 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-02-19 08:57:00.763952 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-02-19 08:57:00.763966 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-02-19 08:57:00.763980 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-19 08:57:00.763994 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-19 08:57:00.764015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-19 08:57:00.764030 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-19 08:57:00.764044 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-19 08:57:00.764058 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-02-19 08:57:00.764072 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-19 08:57:00.764088 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-19 08:57:00.764102 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-19 08:57:00.764116 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-19 08:57:00.764130 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-19 08:57:00.764144 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-02-19 08:57:00.764158 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-19 08:57:00.764178 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-19 08:57:00.764192 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-19 08:57:00.764206 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-19 08:57:00.764220 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-19 08:57:00.764233 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-02-19 08:57:00.764247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-19 08:57:00.764261 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-19 08:57:00.764275 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-19 08:57:00.764289 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-19 08:57:00.764302 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-19 08:57:00.764323 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-02-19 08:57:00.764337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-19 08:57:00.764351 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-19 08:57:00.764365 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-19 08:57:00.764379 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-19 08:57:00.764393 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-19 08:57:00.764406 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-02-19 08:57:00.764420 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-02-19 08:57:00.764434 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-19 08:57:00.764447 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-02-19 08:57:00.764462 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-19 08:57:00.764475 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-02-19 08:57:00.764489 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-02-19 08:57:00.764503 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-02-19 08:57:00.764518 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 
'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-02-19 08:57:00.764537 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-02-19 08:57:00.764552 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-02-19 08:57:00.764566 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-02-19 08:57:00.764580 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-02-19 08:57:00.764594 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-19 08:57:00.764608 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-02-19 08:57:00.764630 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-19 08:57:00.764655 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-02-19 08:57:00.764681 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-02-19 08:57:00.764699 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-02-19 08:57:00.764713 | orchestrator | 2025-02-19 08:57:00.764727 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-19 08:57:00.764742 | orchestrator | Wednesday 19 February 2025 08:54:34 +0000 (0:00:22.653) 0:00:43.609 **** 2025-02-19 08:57:00.764756 | orchestrator | 2025-02-19 08:57:00.764770 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-19 08:57:00.764882 | orchestrator | Wednesday 19 February 2025 08:54:34 +0000 (0:00:00.107) 0:00:43.717 **** 2025-02-19 08:57:00.764903 | orchestrator | 2025-02-19 08:57:00.764918 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-19 08:57:00.764932 | orchestrator | Wednesday 19 February 2025 08:54:34 +0000 (0:00:00.092) 0:00:43.809 **** 2025-02-19 08:57:00.764946 | orchestrator | 2025-02-19 08:57:00.764960 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-19 08:57:00.764974 | orchestrator | Wednesday 19 February 2025 08:54:34 +0000 (0:00:00.234) 0:00:44.044 **** 2025-02-19 08:57:00.764988 | orchestrator | 2025-02-19 08:57:00.765002 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-19 08:57:00.765015 | orchestrator | Wednesday 19 February 2025 08:54:35 +0000 (0:00:00.058) 0:00:44.103 **** 2025-02-19 08:57:00.765029 | orchestrator | 2025-02-19 08:57:00.765049 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-02-19 08:57:00.765063 | orchestrator | Wednesday 19 February 2025 08:54:35 +0000 (0:00:00.057) 0:00:44.160 **** 2025-02-19 08:57:00.765076 | orchestrator | 2025-02-19 08:57:00.765091 | orchestrator | RUNNING HANDLER [ovn-controller : 
Reload systemd config] *********************** 2025-02-19 08:57:00.765104 | orchestrator | Wednesday 19 February 2025 08:54:35 +0000 (0:00:00.060) 0:00:44.221 **** 2025-02-19 08:57:00.765118 | orchestrator | ok: [testbed-node-4] 2025-02-19 08:57:00.765132 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.765147 | orchestrator | ok: [testbed-node-3] 2025-02-19 08:57:00.765161 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.765175 | orchestrator | ok: [testbed-node-5] 2025-02-19 08:57:00.765189 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.765203 | orchestrator | 2025-02-19 08:57:00.765216 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-02-19 08:57:00.765230 | orchestrator | Wednesday 19 February 2025 08:54:37 +0000 (0:00:02.625) 0:00:46.847 **** 2025-02-19 08:57:00.765244 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.765259 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:57:00.765272 | orchestrator | changed: [testbed-node-3] 2025-02-19 08:57:00.765286 | orchestrator | changed: [testbed-node-5] 2025-02-19 08:57:00.765300 | orchestrator | changed: [testbed-node-4] 2025-02-19 08:57:00.765314 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:57:00.765328 | orchestrator | 2025-02-19 08:57:00.765341 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-02-19 08:57:00.765355 | orchestrator | 2025-02-19 08:57:00.765369 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-19 08:57:00.765383 | orchestrator | Wednesday 19 February 2025 08:55:00 +0000 (0:00:23.088) 0:01:09.935 **** 2025-02-19 08:57:00.765397 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:57:00.765411 | orchestrator | 2025-02-19 08:57:00.765425 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-19 08:57:00.765439 | orchestrator | Wednesday 19 February 2025 08:55:03 +0000 (0:00:02.561) 0:01:12.496 **** 2025-02-19 08:57:00.765453 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:57:00.765467 | orchestrator | 2025-02-19 08:57:00.765481 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-02-19 08:57:00.765495 | orchestrator | Wednesday 19 February 2025 08:55:05 +0000 (0:00:02.457) 0:01:14.953 **** 2025-02-19 08:57:00.765509 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.765523 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.765536 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.765550 | orchestrator | 2025-02-19 08:57:00.765564 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-02-19 08:57:00.765578 | orchestrator | Wednesday 19 February 2025 08:55:08 +0000 (0:00:03.083) 0:01:18.037 **** 2025-02-19 08:57:00.765592 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.765606 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.765627 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.765641 | orchestrator | 2025-02-19 08:57:00.765672 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-02-19 08:57:00.765695 | orchestrator | Wednesday 19 February 2025 08:55:10 +0000 (0:00:01.289) 0:01:19.327 **** 
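
Note: the "Configure OVN in OVSDB" task above writes the chassis settings (encapsulation IP, Geneve tunnelling, the southbound DB endpoints on port 6642, probe intervals, bridge/MAC mappings and CMS options) into each node's local Open vSwitch database as external-ids, where ovn-controller picks them up. A minimal sketch of inspecting or setting the same keys by hand follows, assuming the commands are run via the openvswitch_vswitchd container that kolla-ansible deploys (the container name and docker exec wrapper are assumptions; the ovs-vsctl syntax itself is standard):

    # Show all OVN-related chassis settings written by the task above
    docker exec openvswitch_vswitchd ovs-vsctl get Open_vSwitch . external_ids

    # Equivalent manual configuration for a single node (values taken from the log)
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . \
        external-ids:ovn-encap-ip=192.168.16.10 \
        external-ids:ovn-encap-type=geneve \
        external-ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
        external-ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"

Per the present/absent pattern in the log, testbed-node-0/1/2 receive the physnet1:br-ex bridge mapping and the gateway CMS options, while testbed-node-3/4/5 only receive a per-node ovn-chassis-mac-mappings entry.
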
2025-02-19 08:57:00.765716 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.765730 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.765744 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.765758 | orchestrator | 2025-02-19 08:57:00.765771 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-02-19 08:57:00.765785 | orchestrator | Wednesday 19 February 2025 08:55:11 +0000 (0:00:01.024) 0:01:20.352 **** 2025-02-19 08:57:00.765799 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.765813 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.765826 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.765904 | orchestrator | 2025-02-19 08:57:00.765921 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-02-19 08:57:00.765935 | orchestrator | Wednesday 19 February 2025 08:55:11 +0000 (0:00:00.574) 0:01:20.926 **** 2025-02-19 08:57:00.765949 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.765963 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.765976 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.765990 | orchestrator | 2025-02-19 08:57:00.766004 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-02-19 08:57:00.766071 | orchestrator | Wednesday 19 February 2025 08:55:12 +0000 (0:00:00.878) 0:01:21.804 **** 2025-02-19 08:57:00.766089 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766110 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766124 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766138 | orchestrator | 2025-02-19 08:57:00.766152 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-02-19 08:57:00.766166 | orchestrator | Wednesday 19 February 2025 08:55:13 +0000 (0:00:00.686) 0:01:22.491 **** 2025-02-19 08:57:00.766180 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766194 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766208 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766222 | orchestrator | 2025-02-19 08:57:00.766236 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-02-19 08:57:00.766250 | orchestrator | Wednesday 19 February 2025 08:55:13 +0000 (0:00:00.508) 0:01:22.999 **** 2025-02-19 08:57:00.766263 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766277 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766291 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766305 | orchestrator | 2025-02-19 08:57:00.766319 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-02-19 08:57:00.766333 | orchestrator | Wednesday 19 February 2025 08:55:14 +0000 (0:00:00.439) 0:01:23.439 **** 2025-02-19 08:57:00.766347 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766361 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766375 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766389 | orchestrator | 2025-02-19 08:57:00.766403 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-02-19 08:57:00.766416 | orchestrator | Wednesday 19 February 2025 08:55:14 +0000 (0:00:00.435) 0:01:23.874 **** 2025-02-19 08:57:00.766429 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766441 | orchestrator | 
skipping: [testbed-node-1] 2025-02-19 08:57:00.766454 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766466 | orchestrator | 2025-02-19 08:57:00.766478 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-02-19 08:57:00.766495 | orchestrator | Wednesday 19 February 2025 08:55:15 +0000 (0:00:00.521) 0:01:24.396 **** 2025-02-19 08:57:00.766508 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766520 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766532 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766552 | orchestrator | 2025-02-19 08:57:00.766565 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-02-19 08:57:00.766578 | orchestrator | Wednesday 19 February 2025 08:55:15 +0000 (0:00:00.334) 0:01:24.730 **** 2025-02-19 08:57:00.766590 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766602 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766615 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766627 | orchestrator | 2025-02-19 08:57:00.766639 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-02-19 08:57:00.766652 | orchestrator | Wednesday 19 February 2025 08:55:16 +0000 (0:00:00.858) 0:01:25.588 **** 2025-02-19 08:57:00.766664 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766676 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766689 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766701 | orchestrator | 2025-02-19 08:57:00.766714 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-02-19 08:57:00.766726 | orchestrator | Wednesday 19 February 2025 08:55:17 +0000 (0:00:00.821) 0:01:26.410 **** 2025-02-19 08:57:00.766738 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766751 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766763 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766775 | orchestrator | 2025-02-19 08:57:00.766788 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-02-19 08:57:00.766800 | orchestrator | Wednesday 19 February 2025 08:55:17 +0000 (0:00:00.410) 0:01:26.821 **** 2025-02-19 08:57:00.766812 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766825 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766855 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766868 | orchestrator | 2025-02-19 08:57:00.766880 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-02-19 08:57:00.766893 | orchestrator | Wednesday 19 February 2025 08:55:19 +0000 (0:00:01.489) 0:01:28.311 **** 2025-02-19 08:57:00.766905 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766917 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.766929 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.766941 | orchestrator | 2025-02-19 08:57:00.766954 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-02-19 08:57:00.766966 | orchestrator | Wednesday 19 February 2025 08:55:20 +0000 (0:00:01.057) 0:01:29.368 **** 2025-02-19 08:57:00.766978 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.766991 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.767011 | 
orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.767024 | orchestrator | 2025-02-19 08:57:00.767036 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-02-19 08:57:00.767048 | orchestrator | Wednesday 19 February 2025 08:55:20 +0000 (0:00:00.586) 0:01:29.955 **** 2025-02-19 08:57:00.767061 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 08:57:00.767073 | orchestrator | 2025-02-19 08:57:00.767085 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-02-19 08:57:00.767098 | orchestrator | Wednesday 19 February 2025 08:55:21 +0000 (0:00:00.860) 0:01:30.816 **** 2025-02-19 08:57:00.767110 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.767123 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.767135 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.767147 | orchestrator | 2025-02-19 08:57:00.767160 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-02-19 08:57:00.767172 | orchestrator | Wednesday 19 February 2025 08:55:22 +0000 (0:00:00.904) 0:01:31.720 **** 2025-02-19 08:57:00.767184 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.767197 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.767209 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.767222 | orchestrator | 2025-02-19 08:57:00.767234 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-02-19 08:57:00.767252 | orchestrator | Wednesday 19 February 2025 08:55:23 +0000 (0:00:00.931) 0:01:32.652 **** 2025-02-19 08:57:00.767265 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.767277 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.767290 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.767302 | orchestrator | 2025-02-19 08:57:00.767315 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-02-19 08:57:00.767327 | orchestrator | Wednesday 19 February 2025 08:55:24 +0000 (0:00:00.651) 0:01:33.304 **** 2025-02-19 08:57:00.767339 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.767351 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.767364 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.767376 | orchestrator | 2025-02-19 08:57:00.767388 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-02-19 08:57:00.767401 | orchestrator | Wednesday 19 February 2025 08:55:24 +0000 (0:00:00.684) 0:01:33.988 **** 2025-02-19 08:57:00.767413 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.767425 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.767438 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.767450 | orchestrator | 2025-02-19 08:57:00.767463 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-02-19 08:57:00.767475 | orchestrator | Wednesday 19 February 2025 08:55:25 +0000 (0:00:00.726) 0:01:34.715 **** 2025-02-19 08:57:00.767487 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.767504 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.767517 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.767529 | orchestrator | 2025-02-19 08:57:00.767541 | orchestrator | TASK 
[ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-02-19 08:57:00.767554 | orchestrator | Wednesday 19 February 2025 08:55:26 +0000 (0:00:00.411) 0:01:35.127 **** 2025-02-19 08:57:00.767566 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.767579 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.767591 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.767603 | orchestrator | 2025-02-19 08:57:00.767615 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-02-19 08:57:00.767628 | orchestrator | Wednesday 19 February 2025 08:55:26 +0000 (0:00:00.840) 0:01:35.968 **** 2025-02-19 08:57:00.767640 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.767652 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.767665 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.767677 | orchestrator | 2025-02-19 08:57:00.767689 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-02-19 08:57:00.767706 | orchestrator | Wednesday 19 February 2025 08:55:27 +0000 (0:00:00.669) 0:01:36.637 **** 2025-02-19 08:57:00.767719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767874 | orchestrator | 2025-02-19 08:57:00.767887 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-02-19 08:57:00.767900 | orchestrator | Wednesday 19 February 2025 08:55:29 +0000 (0:00:02.133) 0:01:38.771 **** 2025-02-19 08:57:00.767913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.767989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768044 | orchestrator | 2025-02-19 08:57:00.768056 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-02-19 08:57:00.768068 | orchestrator | Wednesday 19 February 2025 08:55:38 +0000 (0:00:08.892) 0:01:47.664 **** 2025-02-19 08:57:00.768081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
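
Note: the "Ensuring config directories exist" and "Copying over config.json files for services" tasks above follow the usual kolla container contract: each service gets a host-side directory under /etc/kolla/<service>/ that is bind-mounted read-only into the container at /var/lib/kolla/config_files/, and the config.json in it tells the container entrypoint which command to run and which files to copy into place on start. A quick way to inspect what was laid down on a controller node is sketched below (paths follow the volume lists in the log; the generated config.json contents themselves are not shown here):

    # Generated configuration for the OVN database/northd services
    ls /etc/kolla/ovn-northd/ /etc/kolla/ovn-nb-db/ /etc/kolla/ovn-sb-db/
    cat /etc/kolla/ovn-northd/config.json

    # Containers and the named volumes backing the clustered databases
    docker ps --filter name=ovn_
    docker volume inspect ovn_nb_db ovn_sb_db
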
2025-02-19 08:57:00.768118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.768206 | orchestrator | 2025-02-19 08:57:00.768219 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-19 08:57:00.768231 | orchestrator | Wednesday 19 February 2025 08:55:41 +0000 (0:00:03.199) 0:01:50.864 **** 2025-02-19 08:57:00.768244 | orchestrator | 2025-02-19 08:57:00.768256 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-19 08:57:00.768268 | orchestrator | Wednesday 19 February 2025 08:55:41 +0000 (0:00:00.144) 0:01:51.008 **** 2025-02-19 08:57:00.768281 | orchestrator | 2025-02-19 08:57:00.768293 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-19 08:57:00.768305 | orchestrator | Wednesday 19 February 2025 08:55:42 +0000 (0:00:00.188) 0:01:51.196 **** 2025-02-19 08:57:00.768318 | orchestrator | 2025-02-19 
08:57:00.768330 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-02-19 08:57:00.768342 | orchestrator | Wednesday 19 February 2025 08:55:42 +0000 (0:00:00.307) 0:01:51.504 **** 2025-02-19 08:57:00.768354 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:57:00.768367 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:57:00.768379 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.768391 | orchestrator | 2025-02-19 08:57:00.768404 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-02-19 08:57:00.768416 | orchestrator | Wednesday 19 February 2025 08:55:50 +0000 (0:00:08.098) 0:01:59.603 **** 2025-02-19 08:57:00.768428 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:57:00.768441 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.768453 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:57:00.768466 | orchestrator | 2025-02-19 08:57:00.768485 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-02-19 08:57:00.768498 | orchestrator | Wednesday 19 February 2025 08:55:58 +0000 (0:00:08.437) 0:02:08.041 **** 2025-02-19 08:57:00.768510 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.768522 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:57:00.768535 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:57:00.768547 | orchestrator | 2025-02-19 08:57:00.768559 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-02-19 08:57:00.768572 | orchestrator | Wednesday 19 February 2025 08:56:03 +0000 (0:00:04.193) 0:02:12.234 **** 2025-02-19 08:57:00.768584 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.768596 | orchestrator | 2025-02-19 08:57:00.768609 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-02-19 08:57:00.768621 | orchestrator | Wednesday 19 February 2025 08:56:03 +0000 (0:00:00.402) 0:02:12.636 **** 2025-02-19 08:57:00.768633 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.768646 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.768658 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.768671 | orchestrator | 2025-02-19 08:57:00.768683 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-02-19 08:57:00.768695 | orchestrator | Wednesday 19 February 2025 08:56:05 +0000 (0:00:01.462) 0:02:14.099 **** 2025-02-19 08:57:00.768707 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.768720 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.768732 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.768745 | orchestrator | 2025-02-19 08:57:00.768757 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-02-19 08:57:00.768769 | orchestrator | Wednesday 19 February 2025 08:56:05 +0000 (0:00:00.651) 0:02:14.750 **** 2025-02-19 08:57:00.768781 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.768794 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.768806 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.768819 | orchestrator | 2025-02-19 08:57:00.768831 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-02-19 08:57:00.768860 | orchestrator | Wednesday 19 February 2025 08:56:06 +0000 (0:00:00.944) 0:02:15.695 **** 2025-02-19 
08:57:00.768872 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.768885 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.768897 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.768909 | orchestrator | 2025-02-19 08:57:00.768922 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-02-19 08:57:00.768934 | orchestrator | Wednesday 19 February 2025 08:56:07 +0000 (0:00:00.634) 0:02:16.329 **** 2025-02-19 08:57:00.768946 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.768959 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.768977 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.768990 | orchestrator | 2025-02-19 08:57:00.769002 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-02-19 08:57:00.769015 | orchestrator | Wednesday 19 February 2025 08:56:08 +0000 (0:00:01.264) 0:02:17.594 **** 2025-02-19 08:57:00.769027 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.769040 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.769052 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.769064 | orchestrator | 2025-02-19 08:57:00.769077 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-02-19 08:57:00.769094 | orchestrator | Wednesday 19 February 2025 08:56:09 +0000 (0:00:00.981) 0:02:18.575 **** 2025-02-19 08:57:00.769107 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.769119 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.769131 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.769144 | orchestrator | 2025-02-19 08:57:00.769156 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-02-19 08:57:00.769168 | orchestrator | Wednesday 19 February 2025 08:56:10 +0000 (0:00:00.574) 0:02:19.150 **** 2025-02-19 08:57:00.769181 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769209 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769222 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769239 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769252 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769265 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769277 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769290 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769308 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769321 | orchestrator | 2025-02-19 08:57:00.769333 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-02-19 08:57:00.769345 | orchestrator | Wednesday 19 February 2025 08:56:12 +0000 (0:00:02.794) 0:02:21.944 **** 2025-02-19 08:57:00.769359 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769390 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769415 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769441 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769483 | orchestrator | 2025-02-19 08:57:00.769496 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-02-19 08:57:00.769508 | orchestrator | Wednesday 19 February 2025 08:56:20 +0000 (0:00:08.028) 0:02:29.973 **** 2025-02-19 08:57:00.769526 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769545 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769557 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769570 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769583 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769595 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769607 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769620 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 08:57:00.769645 | orchestrator | 2025-02-19 08:57:00.769658 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-19 08:57:00.769670 | orchestrator | Wednesday 19 February 2025 08:56:29 +0000 (0:00:08.535) 0:02:38.508 **** 2025-02-19 08:57:00.769682 | 
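For orientation, the service definitions echoed by the "Check ovn containers" items above (container name, image, volumes, dimensions) are what kolla-ansible compares against the running containers. The following is only a rough Python sketch of how such a dict could be rendered as a docker run command line; the real deployment uses kolla-ansible's own container module, not a helper like this.

# Minimal, illustrative sketch: turn a kolla-style service definition (as seen
# in the log items above) into a "docker run" command line.
import shlex

service = {
    "container_name": "ovn_nb_db",
    "image": "registry.osism.tech/kolla/ovn-nb-db-server:2024.1",
    "volumes": [
        "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},  # resource limits; empty in this deployment, as logged
}

def docker_run_command(svc: dict) -> str:
    parts = ["docker", "run", "-d", "--name", svc["container_name"],
             "--restart", "unless-stopped"]
    for volume in svc["volumes"]:
        if volume:  # templates may leave empty strings for omitted mounts
            parts += ["-v", volume]
    parts.append(svc["image"])
    return " ".join(shlex.quote(p) for p in parts)

print(docker_run_command(service))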
orchestrator | 2025-02-19 08:57:00.769695 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-19 08:57:00.769708 | orchestrator | Wednesday 19 February 2025 08:56:29 +0000 (0:00:00.047) 0:02:38.555 **** 2025-02-19 08:57:00.769720 | orchestrator | 2025-02-19 08:57:00.769732 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-02-19 08:57:00.769750 | orchestrator | Wednesday 19 February 2025 08:56:29 +0000 (0:00:00.132) 0:02:38.688 **** 2025-02-19 08:57:00.769763 | orchestrator | 2025-02-19 08:57:00.769775 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-02-19 08:57:00.769788 | orchestrator | Wednesday 19 February 2025 08:56:29 +0000 (0:00:00.052) 0:02:38.741 **** 2025-02-19 08:57:00.769800 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:57:00.769812 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:57:00.769825 | orchestrator | 2025-02-19 08:57:00.769859 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-02-19 08:57:00.769872 | orchestrator | Wednesday 19 February 2025 08:56:36 +0000 (0:00:06.569) 0:02:45.310 **** 2025-02-19 08:57:00.769885 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:57:00.769897 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:57:00.769910 | orchestrator | 2025-02-19 08:57:00.769922 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-02-19 08:57:00.769935 | orchestrator | Wednesday 19 February 2025 08:56:43 +0000 (0:00:07.052) 0:02:52.363 **** 2025-02-19 08:57:00.769947 | orchestrator | changed: [testbed-node-1] 2025-02-19 08:57:00.769959 | orchestrator | changed: [testbed-node-2] 2025-02-19 08:57:00.769972 | orchestrator | 2025-02-19 08:57:00.769984 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-02-19 08:57:00.769996 | orchestrator | Wednesday 19 February 2025 08:56:50 +0000 (0:00:07.494) 0:02:59.857 **** 2025-02-19 08:57:00.770009 | orchestrator | skipping: [testbed-node-0] 2025-02-19 08:57:00.770044 | orchestrator | 2025-02-19 08:57:00.770059 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-02-19 08:57:00.770071 | orchestrator | Wednesday 19 February 2025 08:56:51 +0000 (0:00:00.351) 0:03:00.208 **** 2025-02-19 08:57:00.770083 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.770107 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.770121 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.770133 | orchestrator | 2025-02-19 08:57:00.770146 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-02-19 08:57:00.770158 | orchestrator | Wednesday 19 February 2025 08:56:52 +0000 (0:00:00.909) 0:03:01.118 **** 2025-02-19 08:57:00.770171 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.770183 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.770197 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.770217 | orchestrator | 2025-02-19 08:57:00.770230 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-02-19 08:57:00.770242 | orchestrator | Wednesday 19 February 2025 08:56:52 +0000 (0:00:00.670) 0:03:01.789 **** 2025-02-19 08:57:00.770255 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.770269 | orchestrator | 
ok: [testbed-node-1] 2025-02-19 08:57:00.770282 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.770294 | orchestrator | 2025-02-19 08:57:00.770307 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-02-19 08:57:00.770319 | orchestrator | Wednesday 19 February 2025 08:56:53 +0000 (0:00:01.267) 0:03:03.057 **** 2025-02-19 08:57:00.770332 | orchestrator | skipping: [testbed-node-1] 2025-02-19 08:57:00.770344 | orchestrator | skipping: [testbed-node-2] 2025-02-19 08:57:00.770357 | orchestrator | changed: [testbed-node-0] 2025-02-19 08:57:00.770369 | orchestrator | 2025-02-19 08:57:00.770381 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-02-19 08:57:00.770394 | orchestrator | Wednesday 19 February 2025 08:56:54 +0000 (0:00:00.653) 0:03:03.710 **** 2025-02-19 08:57:00.770406 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.770419 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.770431 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.770443 | orchestrator | 2025-02-19 08:57:00.770460 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-02-19 08:57:00.770473 | orchestrator | Wednesday 19 February 2025 08:56:56 +0000 (0:00:01.748) 0:03:05.458 **** 2025-02-19 08:57:00.770492 | orchestrator | ok: [testbed-node-0] 2025-02-19 08:57:00.770504 | orchestrator | ok: [testbed-node-1] 2025-02-19 08:57:00.770516 | orchestrator | ok: [testbed-node-2] 2025-02-19 08:57:00.770528 | orchestrator | 2025-02-19 08:57:00.770541 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 08:57:00.770553 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-02-19 08:57:00.770566 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-02-19 08:57:00.770579 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-02-19 08:57:00.770592 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:57:00.770610 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:57:00.770622 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 08:57:00.770635 | orchestrator | 2025-02-19 08:57:00.770647 | orchestrator | 2025-02-19 08:57:00.770660 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 08:57:00.770672 | orchestrator | Wednesday 19 February 2025 08:56:59 +0000 (0:00:02.703) 0:03:08.162 **** 2025-02-19 08:57:00.770685 | orchestrator | =============================================================================== 2025-02-19 08:57:00.770698 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.09s 2025-02-19 08:57:00.770710 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.65s 2025-02-19 08:57:00.770722 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.49s 2025-02-19 08:57:00.770735 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.67s 2025-02-19 08:57:00.770747 | orchestrator | ovn-db : Restart ovn-northd container 
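The "Configure OVN NB/SB connection settings" and "Wait for ovn-nb-db / ovn-sb-db" tasks above ran only on the elected cluster leader (testbed-node-0). Below is a minimal sketch, assuming the conventional OVN ports 6641/6642 and node-0's API address, of the two operations involved: publishing the databases on TCP and waiting for them to accept connections. The exact commands kolla-ansible issues inside the ovn_nb_db/ovn_sb_db containers are not shown in the log.

# Sketch only: expose the OVN NB/SB databases and wait for them to listen.
import socket
import subprocess
import time

def set_connection(ctl: str, port: int, listen_addr: str) -> None:
    # e.g. ovn-nbctl set-connection ptcp:6641:192.168.16.10
    subprocess.run([ctl, "set-connection", f"ptcp:{port}:{listen_addr}"], check=True)

def wait_for_db(host: str, port: int, timeout: float = 60.0) -> None:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return
        except OSError:
            time.sleep(1)
    raise TimeoutError(f"OVN DB on {host}:{port} did not become reachable")

if __name__ == "__main__":
    api_address = "192.168.16.10"           # assumed: node-0 internal API address
    set_connection("ovn-nbctl", 6641, api_address)  # Northbound DB
    set_connection("ovn-sbctl", 6642, api_address)  # Southbound DB
    for port in (6641, 6642):
        wait_for_db(api_address, port)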
---------------------------------- 11.69s 2025-02-19 08:57:00.770760 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 8.90s 2025-02-19 08:57:00.770772 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 8.54s 2025-02-19 08:57:00.770790 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 8.03s 2025-02-19 08:57:03.816998 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.56s 2025-02-19 08:57:03.817127 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.19s 2025-02-19 08:57:03.817147 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 3.08s 2025-02-19 08:57:03.817170 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.94s 2025-02-19 08:57:03.817194 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.79s 2025-02-19 08:57:03.817219 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.74s 2025-02-19 08:57:03.817244 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.73s 2025-02-19 08:57:03.817268 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 2.70s 2025-02-19 08:57:03.817293 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.68s 2025-02-19 08:57:03.817317 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.63s 2025-02-19 08:57:03.817336 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.56s 2025-02-19 08:57:03.817350 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.46s 2025-02-19 08:57:03.817365 | orchestrator | 2025-02-19 08:57:00 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:03.817407 | orchestrator | 2025-02-19 08:57:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:03.817441 | orchestrator | 2025-02-19 08:57:03 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:03.820079 | orchestrator | 2025-02-19 08:57:03 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:03.829256 | orchestrator | 2025-02-19 08:57:03 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:06.881721 | orchestrator | 2025-02-19 08:57:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:06.881968 | orchestrator | 2025-02-19 08:57:06 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:06.882125 | orchestrator | 2025-02-19 08:57:06 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:06.883548 | orchestrator | 2025-02-19 08:57:06 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:09.932839 | orchestrator | 2025-02-19 08:57:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:09.933052 | orchestrator | 2025-02-19 08:57:09 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:09.934400 | orchestrator | 2025-02-19 08:57:09 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:09.936134 | orchestrator | 2025-02-19 08:57:09 | INFO  | Task 
80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:09.936250 | orchestrator | 2025-02-19 08:57:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:12.980730 | orchestrator | 2025-02-19 08:57:12 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:12.983416 | orchestrator | 2025-02-19 08:57:12 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:12.988076 | orchestrator | 2025-02-19 08:57:12 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:16.041585 | orchestrator | 2025-02-19 08:57:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:16.041733 | orchestrator | 2025-02-19 08:57:16 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:16.045146 | orchestrator | 2025-02-19 08:57:16 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:16.045267 | orchestrator | 2025-02-19 08:57:16 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:19.093741 | orchestrator | 2025-02-19 08:57:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:19.093972 | orchestrator | 2025-02-19 08:57:19 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:19.095601 | orchestrator | 2025-02-19 08:57:19 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:19.102355 | orchestrator | 2025-02-19 08:57:19 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:22.165615 | orchestrator | 2025-02-19 08:57:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:22.165774 | orchestrator | 2025-02-19 08:57:22 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:22.166891 | orchestrator | 2025-02-19 08:57:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:22.170341 | orchestrator | 2025-02-19 08:57:22 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:25.233838 | orchestrator | 2025-02-19 08:57:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:25.234013 | orchestrator | 2025-02-19 08:57:25 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:25.235194 | orchestrator | 2025-02-19 08:57:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:25.235296 | orchestrator | 2025-02-19 08:57:25 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:25.235588 | orchestrator | 2025-02-19 08:57:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:28.280356 | orchestrator | 2025-02-19 08:57:28 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:28.285621 | orchestrator | 2025-02-19 08:57:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:31.331579 | orchestrator | 2025-02-19 08:57:28 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:31.331687 | orchestrator | 2025-02-19 08:57:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:31.331716 | orchestrator | 2025-02-19 08:57:31 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:31.332434 | orchestrator | 2025-02-19 08:57:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state 
STARTED 2025-02-19 08:57:31.335002 | orchestrator | 2025-02-19 08:57:31 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:34.375027 | orchestrator | 2025-02-19 08:57:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:34.375160 | orchestrator | 2025-02-19 08:57:34 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:34.375958 | orchestrator | 2025-02-19 08:57:34 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:34.378473 | orchestrator | 2025-02-19 08:57:34 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:37.431191 | orchestrator | 2025-02-19 08:57:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:37.431342 | orchestrator | 2025-02-19 08:57:37 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:37.431431 | orchestrator | 2025-02-19 08:57:37 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:37.432317 | orchestrator | 2025-02-19 08:57:37 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:40.490271 | orchestrator | 2025-02-19 08:57:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:40.490380 | orchestrator | 2025-02-19 08:57:40 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:40.491078 | orchestrator | 2025-02-19 08:57:40 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:40.491369 | orchestrator | 2025-02-19 08:57:40 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:43.564080 | orchestrator | 2025-02-19 08:57:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:43.564226 | orchestrator | 2025-02-19 08:57:43 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:43.565015 | orchestrator | 2025-02-19 08:57:43 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:43.565058 | orchestrator | 2025-02-19 08:57:43 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:46.607126 | orchestrator | 2025-02-19 08:57:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:46.607285 | orchestrator | 2025-02-19 08:57:46 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:46.610161 | orchestrator | 2025-02-19 08:57:46 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:46.611733 | orchestrator | 2025-02-19 08:57:46 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:49.661152 | orchestrator | 2025-02-19 08:57:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:49.661289 | orchestrator | 2025-02-19 08:57:49 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:49.661375 | orchestrator | 2025-02-19 08:57:49 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:49.662484 | orchestrator | 2025-02-19 08:57:49 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:52.713482 | orchestrator | 2025-02-19 08:57:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:52.713610 | orchestrator | 2025-02-19 08:57:52 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:52.713931 | orchestrator 
| 2025-02-19 08:57:52 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:52.718132 | orchestrator | 2025-02-19 08:57:52 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:55.773590 | orchestrator | 2025-02-19 08:57:52 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:55.773744 | orchestrator | 2025-02-19 08:57:55 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:55.773835 | orchestrator | 2025-02-19 08:57:55 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:55.774305 | orchestrator | 2025-02-19 08:57:55 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:57:58.822221 | orchestrator | 2025-02-19 08:57:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:57:58.822376 | orchestrator | 2025-02-19 08:57:58 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:57:58.823763 | orchestrator | 2025-02-19 08:57:58 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:57:58.823803 | orchestrator | 2025-02-19 08:57:58 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:01.877250 | orchestrator | 2025-02-19 08:57:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:01.877361 | orchestrator | 2025-02-19 08:58:01 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:01.878353 | orchestrator | 2025-02-19 08:58:01 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:01.879602 | orchestrator | 2025-02-19 08:58:01 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:04.931567 | orchestrator | 2025-02-19 08:58:01 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:04.931751 | orchestrator | 2025-02-19 08:58:04 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:04.933234 | orchestrator | 2025-02-19 08:58:04 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:04.934442 | orchestrator | 2025-02-19 08:58:04 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:07.985834 | orchestrator | 2025-02-19 08:58:04 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:07.985982 | orchestrator | 2025-02-19 08:58:07 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:07.986809 | orchestrator | 2025-02-19 08:58:07 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:07.986850 | orchestrator | 2025-02-19 08:58:07 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:11.032531 | orchestrator | 2025-02-19 08:58:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:11.032700 | orchestrator | 2025-02-19 08:58:11 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:11.033661 | orchestrator | 2025-02-19 08:58:11 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:11.035352 | orchestrator | 2025-02-19 08:58:11 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:14.094331 | orchestrator | 2025-02-19 08:58:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:14.094478 | orchestrator | 2025-02-19 08:58:14 | INFO  | Task 
ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:17.145365 | orchestrator | 2025-02-19 08:58:14 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:17.145493 | orchestrator | 2025-02-19 08:58:14 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:17.145513 | orchestrator | 2025-02-19 08:58:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:17.145547 | orchestrator | 2025-02-19 08:58:17 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:17.145828 | orchestrator | 2025-02-19 08:58:17 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:17.147097 | orchestrator | 2025-02-19 08:58:17 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:20.195753 | orchestrator | 2025-02-19 08:58:17 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:20.195963 | orchestrator | 2025-02-19 08:58:20 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:20.196178 | orchestrator | 2025-02-19 08:58:20 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:20.197714 | orchestrator | 2025-02-19 08:58:20 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:23.244833 | orchestrator | 2025-02-19 08:58:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:23.245032 | orchestrator | 2025-02-19 08:58:23 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:23.247433 | orchestrator | 2025-02-19 08:58:23 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:23.248730 | orchestrator | 2025-02-19 08:58:23 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:26.297739 | orchestrator | 2025-02-19 08:58:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:26.297886 | orchestrator | 2025-02-19 08:58:26 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:29.352979 | orchestrator | 2025-02-19 08:58:26 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:29.353085 | orchestrator | 2025-02-19 08:58:26 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:29.353099 | orchestrator | 2025-02-19 08:58:26 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:29.353124 | orchestrator | 2025-02-19 08:58:29 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:29.354075 | orchestrator | 2025-02-19 08:58:29 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:29.355467 | orchestrator | 2025-02-19 08:58:29 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:32.403827 | orchestrator | 2025-02-19 08:58:29 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:32.404058 | orchestrator | 2025-02-19 08:58:32 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:32.406561 | orchestrator | 2025-02-19 08:58:32 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:35.454478 | orchestrator | 2025-02-19 08:58:32 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:35.454608 | orchestrator | 2025-02-19 08:58:32 | INFO  | Wait 1 second(s) until the next 
check 2025-02-19 08:58:35.454647 | orchestrator | 2025-02-19 08:58:35 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:35.455720 | orchestrator | 2025-02-19 08:58:35 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:35.457414 | orchestrator | 2025-02-19 08:58:35 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:38.498280 | orchestrator | 2025-02-19 08:58:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:38.498402 | orchestrator | 2025-02-19 08:58:38 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:38.503146 | orchestrator | 2025-02-19 08:58:38 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:41.551401 | orchestrator | 2025-02-19 08:58:38 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:41.551534 | orchestrator | 2025-02-19 08:58:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:41.551575 | orchestrator | 2025-02-19 08:58:41 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:41.552895 | orchestrator | 2025-02-19 08:58:41 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:41.553757 | orchestrator | 2025-02-19 08:58:41 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:41.554368 | orchestrator | 2025-02-19 08:58:41 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:44.616978 | orchestrator | 2025-02-19 08:58:44 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:44.621088 | orchestrator | 2025-02-19 08:58:44 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:44.622183 | orchestrator | 2025-02-19 08:58:44 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:44.622346 | orchestrator | 2025-02-19 08:58:44 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:47.680836 | orchestrator | 2025-02-19 08:58:47 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:47.681459 | orchestrator | 2025-02-19 08:58:47 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:47.682279 | orchestrator | 2025-02-19 08:58:47 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:47.682478 | orchestrator | 2025-02-19 08:58:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:50.721833 | orchestrator | 2025-02-19 08:58:50 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:50.722523 | orchestrator | 2025-02-19 08:58:50 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:50.723826 | orchestrator | 2025-02-19 08:58:50 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:50.724012 | orchestrator | 2025-02-19 08:58:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:53.771790 | orchestrator | 2025-02-19 08:58:53 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:53.773829 | orchestrator | 2025-02-19 08:58:53 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:53.775465 | orchestrator | 2025-02-19 08:58:53 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 
08:58:56.827105 | orchestrator | 2025-02-19 08:58:53 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:56.827276 | orchestrator | 2025-02-19 08:58:56 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:56.829687 | orchestrator | 2025-02-19 08:58:56 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:56.831038 | orchestrator | 2025-02-19 08:58:56 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:59.872244 | orchestrator | 2025-02-19 08:58:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:58:59.872340 | orchestrator | 2025-02-19 08:58:59 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:58:59.877694 | orchestrator | 2025-02-19 08:58:59 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:58:59.878391 | orchestrator | 2025-02-19 08:58:59 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:58:59.878489 | orchestrator | 2025-02-19 08:58:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:02.919512 | orchestrator | 2025-02-19 08:59:02 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:02.920830 | orchestrator | 2025-02-19 08:59:02 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:02.922559 | orchestrator | 2025-02-19 08:59:02 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:05.982660 | orchestrator | 2025-02-19 08:59:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:05.982792 | orchestrator | 2025-02-19 08:59:05 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:05.984061 | orchestrator | 2025-02-19 08:59:05 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:05.985066 | orchestrator | 2025-02-19 08:59:05 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:09.040989 | orchestrator | 2025-02-19 08:59:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:09.041143 | orchestrator | 2025-02-19 08:59:09 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:12.093138 | orchestrator | 2025-02-19 08:59:09 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:12.093381 | orchestrator | 2025-02-19 08:59:09 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:12.093429 | orchestrator | 2025-02-19 08:59:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:12.093469 | orchestrator | 2025-02-19 08:59:12 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:12.093561 | orchestrator | 2025-02-19 08:59:12 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:12.094630 | orchestrator | 2025-02-19 08:59:12 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:12.094793 | orchestrator | 2025-02-19 08:59:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:15.149429 | orchestrator | 2025-02-19 08:59:15 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:15.149624 | orchestrator | 2025-02-19 08:59:15 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:15.149657 | orchestrator | 2025-02-19 
08:59:15 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:18.193500 | orchestrator | 2025-02-19 08:59:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:18.193649 | orchestrator | 2025-02-19 08:59:18 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:18.197185 | orchestrator | 2025-02-19 08:59:18 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:18.197230 | orchestrator | 2025-02-19 08:59:18 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:21.240647 | orchestrator | 2025-02-19 08:59:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:21.240855 | orchestrator | 2025-02-19 08:59:21 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:21.704483 | orchestrator | 2025-02-19 08:59:21 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:24.305901 | orchestrator | 2025-02-19 08:59:21 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:24.306151 | orchestrator | 2025-02-19 08:59:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:24.306199 | orchestrator | 2025-02-19 08:59:24 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:24.306327 | orchestrator | 2025-02-19 08:59:24 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:24.307128 | orchestrator | 2025-02-19 08:59:24 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:27.367865 | orchestrator | 2025-02-19 08:59:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:27.368022 | orchestrator | 2025-02-19 08:59:27 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:27.370088 | orchestrator | 2025-02-19 08:59:27 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:27.373385 | orchestrator | 2025-02-19 08:59:27 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:27.374313 | orchestrator | 2025-02-19 08:59:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:30.419218 | orchestrator | 2025-02-19 08:59:30 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:30.423667 | orchestrator | 2025-02-19 08:59:30 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:30.425736 | orchestrator | 2025-02-19 08:59:30 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:33.478533 | orchestrator | 2025-02-19 08:59:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:33.478685 | orchestrator | 2025-02-19 08:59:33 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:33.479478 | orchestrator | 2025-02-19 08:59:33 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:33.482422 | orchestrator | 2025-02-19 08:59:33 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:36.525220 | orchestrator | 2025-02-19 08:59:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:36.525346 | orchestrator | 2025-02-19 08:59:36 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:36.526008 | orchestrator | 2025-02-19 08:59:36 | INFO  | Task 
9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:36.527520 | orchestrator | 2025-02-19 08:59:36 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:36.527876 | orchestrator | 2025-02-19 08:59:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:39.569215 | orchestrator | 2025-02-19 08:59:39 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:39.572360 | orchestrator | 2025-02-19 08:59:39 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:42.615494 | orchestrator | 2025-02-19 08:59:39 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:42.615624 | orchestrator | 2025-02-19 08:59:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:42.615663 | orchestrator | 2025-02-19 08:59:42 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:42.618532 | orchestrator | 2025-02-19 08:59:42 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:42.619342 | orchestrator | 2025-02-19 08:59:42 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:42.619913 | orchestrator | 2025-02-19 08:59:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:45.676906 | orchestrator | 2025-02-19 08:59:45 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:45.682764 | orchestrator | 2025-02-19 08:59:45 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:45.684787 | orchestrator | 2025-02-19 08:59:45 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:48.756346 | orchestrator | 2025-02-19 08:59:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:48.756540 | orchestrator | 2025-02-19 08:59:48 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:48.756640 | orchestrator | 2025-02-19 08:59:48 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:51.798634 | orchestrator | 2025-02-19 08:59:48 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:51.798762 | orchestrator | 2025-02-19 08:59:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:51.798803 | orchestrator | 2025-02-19 08:59:51 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:51.803281 | orchestrator | 2025-02-19 08:59:51 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:54.857597 | orchestrator | 2025-02-19 08:59:51 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:54.857698 | orchestrator | 2025-02-19 08:59:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:54.857743 | orchestrator | 2025-02-19 08:59:54 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 08:59:54.860848 | orchestrator | 2025-02-19 08:59:54 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:54.861013 | orchestrator | 2025-02-19 08:59:54 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:57.898309 | orchestrator | 2025-02-19 08:59:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 08:59:57.898487 | orchestrator | 2025-02-19 08:59:57 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state 
STARTED 2025-02-19 08:59:57.899130 | orchestrator | 2025-02-19 08:59:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 08:59:57.900188 | orchestrator | 2025-02-19 08:59:57 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 08:59:57.900445 | orchestrator | 2025-02-19 08:59:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:00.939570 | orchestrator | 2025-02-19 09:00:00 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:00.941827 | orchestrator | 2025-02-19 09:00:00 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:00.944256 | orchestrator | 2025-02-19 09:00:00 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:03.982259 | orchestrator | 2025-02-19 09:00:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:03.982404 | orchestrator | 2025-02-19 09:00:03 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:03.984195 | orchestrator | 2025-02-19 09:00:03 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:03.988825 | orchestrator | 2025-02-19 09:00:03 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:07.030218 | orchestrator | 2025-02-19 09:00:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:07.030386 | orchestrator | 2025-02-19 09:00:07 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:07.031737 | orchestrator | 2025-02-19 09:00:07 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:07.033901 | orchestrator | 2025-02-19 09:00:07 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:07.034232 | orchestrator | 2025-02-19 09:00:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:10.080123 | orchestrator | 2025-02-19 09:00:10 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:10.081603 | orchestrator | 2025-02-19 09:00:10 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:10.083503 | orchestrator | 2025-02-19 09:00:10 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:13.155441 | orchestrator | 2025-02-19 09:00:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:13.155582 | orchestrator | 2025-02-19 09:00:13 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:13.159299 | orchestrator | 2025-02-19 09:00:13 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:13.159360 | orchestrator | 2025-02-19 09:00:13 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:16.191714 | orchestrator | 2025-02-19 09:00:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:16.191849 | orchestrator | 2025-02-19 09:00:16 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:16.192270 | orchestrator | 2025-02-19 09:00:16 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:16.193244 | orchestrator | 2025-02-19 09:00:16 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:19.237133 | orchestrator | 2025-02-19 09:00:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:19.237234 | orchestrator 
| 2025-02-19 09:00:19 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:19.237323 | orchestrator | 2025-02-19 09:00:19 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:19.237572 | orchestrator | 2025-02-19 09:00:19 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:22.281802 | orchestrator | 2025-02-19 09:00:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:22.281937 | orchestrator | 2025-02-19 09:00:22 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:22.283014 | orchestrator | 2025-02-19 09:00:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:25.325848 | orchestrator | 2025-02-19 09:00:22 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:25.325949 | orchestrator | 2025-02-19 09:00:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:25.326002 | orchestrator | 2025-02-19 09:00:25 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:25.327408 | orchestrator | 2025-02-19 09:00:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:28.373219 | orchestrator | 2025-02-19 09:00:25 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:28.373351 | orchestrator | 2025-02-19 09:00:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:28.373389 | orchestrator | 2025-02-19 09:00:28 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:28.378094 | orchestrator | 2025-02-19 09:00:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:28.379350 | orchestrator | 2025-02-19 09:00:28 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:31.418789 | orchestrator | 2025-02-19 09:00:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:31.418927 | orchestrator | 2025-02-19 09:00:31 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:31.419803 | orchestrator | 2025-02-19 09:00:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:31.421648 | orchestrator | 2025-02-19 09:00:31 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:31.421835 | orchestrator | 2025-02-19 09:00:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:34.482878 | orchestrator | 2025-02-19 09:00:34 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:34.484268 | orchestrator | 2025-02-19 09:00:34 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:34.486707 | orchestrator | 2025-02-19 09:00:34 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:34.487015 | orchestrator | 2025-02-19 09:00:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:37.533976 | orchestrator | 2025-02-19 09:00:37 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:37.535299 | orchestrator | 2025-02-19 09:00:37 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:37.536583 | orchestrator | 2025-02-19 09:00:37 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:40.595904 | orchestrator | 2025-02-19 09:00:37 | 
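The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the deploy wrapper polling the three submitted task IDs until they finish. A rough approximation of that loop, assuming a Celery result backend (the real loop lives in OSISM's tooling and may differ):

# Sketch of a polling loop that would produce log lines like the ones above.
import logging
import time
from celery.result import AsyncResult  # assumes a configured Celery app/backend

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)

def wait_for_tasks(task_ids: list[str], poll_interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = AsyncResult(task_id).state
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", int(poll_interval))
            time.sleep(poll_interval)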
INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:40.596105 | orchestrator | 2025-02-19 09:00:40 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state STARTED 2025-02-19 09:00:40.596595 | orchestrator | 2025-02-19 09:00:40 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:40.600517 | orchestrator | 2025-02-19 09:00:40 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:40.601556 | orchestrator | 2025-02-19 09:00:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:43.664659 | orchestrator | 2025-02-19 09:00:43 | INFO  | Task ef7e6e63-b609-48cb-a87b-4bfb9584a28f is in state SUCCESS 2025-02-19 09:00:43.667640 | orchestrator | 2025-02-19 09:00:43.667728 | orchestrator | 2025-02-19 09:00:43.667749 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:00:43.667765 | orchestrator | 2025-02-19 09:00:43.667780 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:00:43.667794 | orchestrator | Wednesday 19 February 2025 08:52:10 +0000 (0:00:00.788) 0:00:00.788 **** 2025-02-19 09:00:43.667808 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.667831 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.667872 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.667896 | orchestrator | 2025-02-19 09:00:43.667918 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:00:43.667945 | orchestrator | Wednesday 19 February 2025 08:52:11 +0000 (0:00:00.943) 0:00:01.732 **** 2025-02-19 09:00:43.667972 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-02-19 09:00:43.668265 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-02-19 09:00:43.668300 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-02-19 09:00:43.668324 | orchestrator | 2025-02-19 09:00:43.668349 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-02-19 09:00:43.668372 | orchestrator | 2025-02-19 09:00:43.668406 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-02-19 09:00:43.668429 | orchestrator | Wednesday 19 February 2025 08:52:12 +0000 (0:00:00.865) 0:00:02.597 **** 2025-02-19 09:00:43.668453 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.668477 | orchestrator | 2025-02-19 09:00:43.668500 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-02-19 09:00:43.668523 | orchestrator | Wednesday 19 February 2025 08:52:13 +0000 (0:00:01.599) 0:00:04.196 **** 2025-02-19 09:00:43.668547 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.668642 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.668660 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.668674 | orchestrator | 2025-02-19 09:00:43.668687 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-02-19 09:00:43.668702 | orchestrator | Wednesday 19 February 2025 08:52:16 +0000 (0:00:02.115) 0:00:06.312 **** 2025-02-19 09:00:43.668716 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.668730 | orchestrator | 2025-02-19 09:00:43.668744 | orchestrator | 
TASK [sysctl : Check IPv6 support] ********************************************* 2025-02-19 09:00:43.668758 | orchestrator | Wednesday 19 February 2025 08:52:17 +0000 (0:00:01.721) 0:00:08.034 **** 2025-02-19 09:00:43.668772 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.668785 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.668883 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.668911 | orchestrator | 2025-02-19 09:00:43.668937 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-02-19 09:00:43.668961 | orchestrator | Wednesday 19 February 2025 08:52:18 +0000 (0:00:01.118) 0:00:09.153 **** 2025-02-19 09:00:43.669169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-19 09:00:43.669199 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-19 09:00:43.669214 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-02-19 09:00:43.669248 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-19 09:00:43.669263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-19 09:00:43.669277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-02-19 09:00:43.669291 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-19 09:00:43.669306 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-19 09:00:43.669320 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-02-19 09:00:43.669334 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-19 09:00:43.669402 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-19 09:00:43.669419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-02-19 09:00:43.669499 | orchestrator | 2025-02-19 09:00:43.669556 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-19 09:00:43.669578 | orchestrator | Wednesday 19 February 2025 08:52:21 +0000 (0:00:02.930) 0:00:12.083 **** 2025-02-19 09:00:43.669598 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-02-19 09:00:43.669618 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-02-19 09:00:43.669639 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-02-19 09:00:43.669697 | orchestrator | 2025-02-19 09:00:43.669719 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-19 09:00:43.669741 | orchestrator | Wednesday 19 February 2025 08:52:23 +0000 (0:00:01.490) 0:00:13.577 **** 2025-02-19 09:00:43.669855 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-02-19 09:00:43.669886 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-02-19 09:00:43.669901 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-02-19 09:00:43.669913 | orchestrator | 2025-02-19 09:00:43.669926 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-19 09:00:43.669939 | orchestrator | 
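The sysctl and module-load tasks above prepare the controllers for the loadbalancer: ip_nonlocal_bind lets haproxy bind to the keepalived VIP before it is assigned to the host, and ip_vs is loaded and persisted via modules-load.d. A minimal root-only sketch of the same effect (kolla-ansible uses the sysctl and modprobe Ansible modules rather than a script like this):

# Sketch: apply the logged sysctl values and persist the ip_vs module.
import pathlib
import subprocess

SYSCTLS = {
    "net.ipv6.ip_nonlocal_bind": "1",
    "net.ipv4.ip_nonlocal_bind": "1",
    "net.unix.max_dgram_qlen": "128",
}

def apply_sysctls() -> None:
    for key, value in SYSCTLS.items():
        # net.ipv4.ip_nonlocal_bind -> /proc/sys/net/ipv4/ip_nonlocal_bind
        path = pathlib.Path("/proc/sys") / key.replace(".", "/")
        path.write_text(value + "\n")

def load_and_persist_module(name: str = "ip_vs") -> None:
    subprocess.run(["modprobe", name], check=True)              # load now
    conf = pathlib.Path("/etc/modules-load.d") / f"{name}.conf"
    conf.write_text(name + "\n")                                 # load on boot

if __name__ == "__main__":
    apply_sysctls()
    load_and_persist_module()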
Wednesday 19 February 2025 08:52:25 +0000 (0:00:02.207) 0:00:15.784 **** 2025-02-19 09:00:43.669951 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-02-19 09:00:43.669964 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.670175 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-02-19 09:00:43.670191 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.670212 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-02-19 09:00:43.670225 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.670237 | orchestrator | 2025-02-19 09:00:43.670250 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-02-19 09:00:43.670275 | orchestrator | Wednesday 19 February 2025 08:52:26 +0000 (0:00:00.636) 0:00:16.421 **** 2025-02-19 09:00:43.670291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.670310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.670336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.670377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.670393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.670406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.670460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.670475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.670495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.670541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', 
'__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.670556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.670569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.670582 | orchestrator | 2025-02-19 09:00:43.670595 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-02-19 09:00:43.670608 | orchestrator | Wednesday 19 February 2025 08:52:28 +0000 (0:00:02.669) 0:00:19.091 **** 2025-02-19 09:00:43.670620 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:00:43.670633 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:00:43.670645 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:00:43.670657 | orchestrator | 2025-02-19 09:00:43.670670 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-02-19 09:00:43.670682 | orchestrator | Wednesday 19 February 2025 08:52:31 +0000 (0:00:02.941) 0:00:22.032 **** 2025-02-19 09:00:43.670694 | orchestrator | skipping: [testbed-node-0] => (item=users)  2025-02-19 09:00:43.670706 | orchestrator | skipping: [testbed-node-1] => (item=users)  2025-02-19 09:00:43.670747 | orchestrator | skipping: [testbed-node-2] => (item=users)  2025-02-19 09:00:43.670797 | orchestrator | skipping: [testbed-node-1] => (item=rules)  2025-02-19 09:00:43.670811 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.670823 | orchestrator | skipping: [testbed-node-0] => (item=rules)  2025-02-19 09:00:43.670836 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.670848 | orchestrator | skipping: [testbed-node-2] => (item=rules)  2025-02-19 09:00:43.670860 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.670873 | orchestrator | 2025-02-19 09:00:43.670885 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-02-19 09:00:43.670897 | orchestrator | Wednesday 19 February 2025 08:52:37 +0000 (0:00:05.284) 0:00:27.317 **** 2025-02-19 09:00:43.670916 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:00:43.670929 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:00:43.670973 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:00:43.671042 | orchestrator | 2025-02-19 09:00:43.671059 | 
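[Annotation] The directory tasks above ("Ensuring config directories exist", the haproxy service config subdir and the keepalived checks subdir) lay down the per-service configuration tree that the later template and copy tasks write into. A rough sketch, assuming node_config_directory is /etc/kolla as the '/etc/kolla/<service>/:/var/lib/kolla/config_files/:ro' volume mounts above suggest; subdirectory names and the mode are illustrative:

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item }}"
    state: directory
    mode: "0770"
  loop:
    - haproxy
    - haproxy/services.d
    - keepalived
    - keepalived/checks
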
orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-02-19 09:00:43.671071 | orchestrator | Wednesday 19 February 2025 08:52:43 +0000 (0:00:06.573) 0:00:33.891 **** 2025-02-19 09:00:43.671143 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.671156 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.671169 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.671182 | orchestrator | 2025-02-19 09:00:43.671216 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-02-19 09:00:43.671231 | orchestrator | Wednesday 19 February 2025 08:52:49 +0000 (0:00:05.438) 0:00:39.329 **** 2025-02-19 09:00:43.671244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.671258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.671271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.671284 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-19 09:00:43.671306 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-19 09:00:43.671352 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-19 09:00:43.671366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.671413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.671427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.671451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.671465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 
'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.671484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.671515 | orchestrator | 2025-02-19 09:00:43.671529 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-02-19 09:00:43.671541 | orchestrator | Wednesday 19 February 2025 08:52:57 +0000 (0:00:08.236) 0:00:47.565 **** 2025-02-19 09:00:43.671554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.671566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.671577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}}) 2025-02-19 09:00:43.671587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.671598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.671608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.671652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.671665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.671712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', 
'__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.671729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.671740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.671751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.671767 | orchestrator | 2025-02-19 09:00:43.671777 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-02-19 09:00:43.671788 | orchestrator | Wednesday 19 February 2025 08:53:03 +0000 (0:00:05.805) 0:00:53.371 **** 2025-02-19 09:00:43.671808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.671841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.671853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.671864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.671875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.671929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.671947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.671966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 
'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.671977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.672002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.672013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.672024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.672034 | orchestrator | 2025-02-19 09:00:43.672045 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-02-19 09:00:43.672068 | orchestrator | Wednesday 19 February 2025 08:53:06 +0000 (0:00:03.224) 0:00:56.595 **** 2025-02-19 09:00:43.672087 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-19 09:00:43.672099 | orchestrator | 
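[Annotation] The "Copying over haproxy.cfg" task shown here renders the haproxy_main.cfg.j2 template into each node's haproxy config directory; the "custom haproxy services configuration" task a little further down then drops per-service overrides from the configuration repository alongside it. A hedged sketch of such a template step; the destination path and the handler name are assumptions, not taken from the role:

- name: Copying over haproxy.cfg
  ansible.builtin.template:
    src: /ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2
    dest: /etc/kolla/haproxy/haproxy_main.cfg
    mode: "0660"
  notify:
    - Restart haproxy container  # hypothetical handler name
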
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-19 09:00:43.672109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-19 09:00:43.672119 | orchestrator | 2025-02-19 09:00:43.672129 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-02-19 09:00:43.672139 | orchestrator | Wednesday 19 February 2025 08:53:09 +0000 (0:00:03.063) 0:00:59.658 **** 2025-02-19 09:00:43.672149 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-19 09:00:43.672226 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.672246 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-19 09:00:43.672265 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.672283 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-19 09:00:43.672301 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.672317 | orchestrator | 2025-02-19 09:00:43.672393 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-02-19 09:00:43.672405 | orchestrator | Wednesday 19 February 2025 08:53:10 +0000 (0:00:01.393) 0:01:01.052 **** 2025-02-19 09:00:43.672429 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.672440 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.672456 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.672467 | orchestrator | 2025-02-19 09:00:43.672477 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-02-19 09:00:43.672487 | orchestrator | Wednesday 19 February 2025 08:53:13 +0000 (0:00:02.866) 0:01:03.919 **** 2025-02-19 09:00:43.672498 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-19 09:00:43.672509 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-19 09:00:43.672520 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-19 09:00:43.672530 | orchestrator | 2025-02-19 09:00:43.672540 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-02-19 09:00:43.672575 | orchestrator | Wednesday 19 February 2025 08:53:18 +0000 (0:00:04.482) 0:01:08.401 **** 2025-02-19 09:00:43.672586 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-19 09:00:43.672596 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-19 09:00:43.672606 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-19 09:00:43.672640 | orchestrator | 2025-02-19 09:00:43.672653 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-02-19 09:00:43.672664 | orchestrator | Wednesday 19 February 2025 08:53:21 +0000 (0:00:03.056) 0:01:11.458 **** 2025-02-19 09:00:43.672698 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy.pem) 2025-02-19 09:00:43.672743 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-02-19 09:00:43.672755 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-02-19 09:00:43.672765 | orchestrator | 2025-02-19 09:00:43.672775 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-02-19 09:00:43.672786 | orchestrator | Wednesday 19 February 2025 08:53:23 +0000 (0:00:02.132) 0:01:13.590 **** 2025-02-19 09:00:43.672796 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-02-19 09:00:43.672806 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-02-19 09:00:43.672868 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-02-19 09:00:43.672881 | orchestrator | 2025-02-19 09:00:43.672892 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-02-19 09:00:43.672902 | orchestrator | Wednesday 19 February 2025 08:53:25 +0000 (0:00:02.325) 0:01:15.915 **** 2025-02-19 09:00:43.672912 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.672922 | orchestrator | 2025-02-19 09:00:43.672933 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-02-19 09:00:43.672972 | orchestrator | Wednesday 19 February 2025 08:53:26 +0000 (0:00:00.802) 0:01:16.717 **** 2025-02-19 09:00:43.672985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.673018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.673038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.673049 | orchestrator 
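[Annotation] The haproxy.pem and haproxy-internal.pem copies, together with the service-cert-copy tasks above, distribute the external and internal TLS bundles plus any extra CA certificates into each service's /etc/kolla/<service>/ directory so the containers can pick them up through their config_files mounts. A sketch of the idea; the source location is an assumption, as the real role takes the files from the deployment host's certificate directory:

- name: Copying over haproxy TLS bundles  # filenames from the log; src path assumed
  ansible.builtin.copy:
    src: "/etc/kolla/certificates/{{ item }}"
    dest: "/etc/kolla/haproxy/{{ item }}"
    mode: "0600"
  loop:
    - haproxy.pem
    - haproxy-internal.pem
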
| changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.673060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.673071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.673088 | orchestrator | 2025-02-19 09:00:43.673098 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-02-19 09:00:43.673109 | orchestrator | Wednesday 19 February 2025 08:53:29 +0000 (0:00:02.577) 0:01:19.295 **** 2025-02-19 09:00:43.673119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.673191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.673203 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.673214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.673225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.673235 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.673254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.673265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.673275 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.673294 | orchestrator | 2025-02-19 09:00:43.673304 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-02-19 09:00:43.673315 | orchestrator | Wednesday 19 February 2025 08:53:30 +0000 (0:00:01.000) 0:01:20.296 **** 2025-02-19 09:00:43.673329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.673340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.673350 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.673361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.673372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.673382 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.673393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-19 09:00:43.673409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.673419 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.673468 | orchestrator | 2025-02-19 09:00:43.673486 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-02-19 09:00:43.673559 | orchestrator | Wednesday 19 February 2025 08:53:31 +0000 (0:00:01.563) 0:01:21.859 **** 2025-02-19 09:00:43.673581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-19 09:00:43.673601 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-19 09:00:43.673619 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-19 09:00:43.673630 | orchestrator | 2025-02-19 09:00:43.673641 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-02-19 09:00:43.673651 | orchestrator | Wednesday 19 February 2025 08:53:34 +0000 (0:00:02.496) 0:01:24.356 **** 2025-02-19 09:00:43.673686 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-19 
09:00:43.673701 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.673718 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-19 09:00:43.673736 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.673760 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-19 09:00:43.673776 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.673792 | orchestrator | 2025-02-19 09:00:43.673809 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-02-19 09:00:43.673826 | orchestrator | Wednesday 19 February 2025 08:53:35 +0000 (0:00:01.355) 0:01:25.711 **** 2025-02-19 09:00:43.673902 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-19 09:00:43.673922 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-19 09:00:43.673942 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-19 09:00:43.673960 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-19 09:00:43.673981 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.674101 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-19 09:00:43.674120 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.674138 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-19 09:00:43.674219 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.674240 | orchestrator | 2025-02-19 09:00:43.674259 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-02-19 09:00:43.674276 | orchestrator | Wednesday 19 February 2025 08:53:37 +0000 (0:00:02.466) 0:01:28.177 **** 2025-02-19 09:00:43.674327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.674401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.674446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.674474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-19 09:00:43.674492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.674511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-19 09:00:43.674528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.674547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.674594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.674627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.674649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-19 09:00:43.674668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec', '__omit_place_holder__98e5a729a3e402706e29e39364d3348806ca6eec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-19 09:00:43.674683 | orchestrator | 2025-02-19 09:00:43.674697 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-02-19 09:00:43.674712 | orchestrator | Wednesday 19 February 2025 08:53:41 +0000 (0:00:03.437) 0:01:31.614 **** 2025-02-19 09:00:43.674725 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.674779 | orchestrator | 2025-02-19 09:00:43.674800 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-02-19 09:00:43.674813 | orchestrator | Wednesday 19 February 2025 08:53:42 +0000 (0:00:00.757) 0:01:32.372 **** 2025-02-19 09:00:43.674828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-19 09:00:43.674844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.674875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.674891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.674953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-19 09:00:43.674974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.675015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-19 09:00:43.675080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.675097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675117 | orchestrator | 2025-02-19 09:00:43.675126 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-02-19 09:00:43.675135 | orchestrator | Wednesday 19 February 2025 08:53:45 +0000 (0:00:03.524) 0:01:35.897 **** 2025-02-19 09:00:43.675148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-19 09:00:43.675157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.675171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': 
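
In the aodh loop above, only the aodh-api item is reported as "changed": it is the only entry with a 'haproxy' sub-dict, defining an internal frontend on port 8042 and an external one behind api.testbed.osism.xyz; the evaluator, listener and notifier items have no frontend and are skipped. A minimal sketch of how such an entry maps to an haproxy listen block follows; render_listen is a hypothetical helper (not kolla-ansible's real template), "<internal-vip>" is a placeholder for the internal VIP, the member addresses 192.168.16.10-12 and port 8042 come from the healthcheck URLs in the items, and the server options mirror those visible in the ceph-rgw entries later in this log:

# Simplified sketch of one frontend/backend pair from a 'haproxy' sub-dict entry.
def render_listen(name, svc, vip, members):
    lines = [f"listen {name}",
             f"    mode {svc['mode']}",
             f"    bind {vip}:{svc['listen_port']}"]
    for host, addr in members.items():
        lines.append(f"    server {host} {addr}:{svc['port']} check inter 2000 rise 2 fall 5")
    return "\n".join(lines)

aodh_api = {"enabled": "yes", "mode": "http", "external": False,
            "port": "8042", "listen_port": "8042"}
members = {"testbed-node-0": "192.168.16.10",
           "testbed-node-1": "192.168.16.11",
           "testbed-node-2": "192.168.16.12"}
print(render_listen("aodh_api", aodh_api, "<internal-vip>", members))
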
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675200 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.675214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-19 09:00:43.675230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.675248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675272 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.675281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-19 09:00:43.675331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.675341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675359 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.675367 | orchestrator | 2025-02-19 09:00:43.675437 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-02-19 09:00:43.675448 | orchestrator | Wednesday 19 February 2025 08:53:46 +0000 (0:00:00.783) 0:01:36.680 **** 2025-02-19 09:00:43.675457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-19 09:00:43.675466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-19 09:00:43.675475 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.675525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-19 09:00:43.675540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-19 09:00:43.675549 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.675558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-19 09:00:43.675566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-19 09:00:43.675575 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.675583 | orchestrator | 2025-02-19 09:00:43.675592 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-02-19 09:00:43.675600 | orchestrator | Wednesday 19 February 2025 08:53:47 +0000 (0:00:00.950) 0:01:37.630 **** 2025-02-19 09:00:43.675609 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.675618 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.675626 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.675635 | orchestrator | 2025-02-19 09:00:43.675644 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-02-19 09:00:43.675652 | orchestrator | Wednesday 19 February 2025 08:53:47 +0000 (0:00:00.358) 0:01:37.989 **** 2025-02-19 09:00:43.675661 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.675669 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.675678 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.675686 | orchestrator | 2025-02-19 09:00:43.675695 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-02-19 09:00:43.675703 | orchestrator | Wednesday 19 February 2025 08:53:48 +0000 (0:00:01.248) 0:01:39.237 **** 2025-02-19 09:00:43.675712 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.675720 | orchestrator | 2025-02-19 09:00:43.675729 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-02-19 09:00:43.675738 | orchestrator | Wednesday 19 February 2025 08:53:50 +0000 (0:00:01.037) 0:01:40.275 **** 2025-02-19 09:00:43.675756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.675767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
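
The proxysql-config tasks for aodh skip on all three nodes, consistent with the proxysql service entry earlier in this log having 'enabled': False, so database traffic in this testbed is balanced by HAProxy rather than ProxySQL. A minimal sketch of that enabled/disabled split, assuming a service map shaped like the loop items above:

services = {"haproxy": {"enabled": True}, "proxysql": {"enabled": False}}
print("configure:", [n for n, s in services.items() if s["enabled"]])      # ['haproxy']
print("skip:", [n for n, s in services.items() if not s["enabled"]])        # ['proxysql']
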
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.675805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.675836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675860 | orchestrator | 2025-02-19 09:00:43.675868 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-02-19 09:00:43.675877 | orchestrator | Wednesday 19 February 2025 08:53:57 +0000 (0:00:07.125) 0:01:47.400 **** 2025-02-19 09:00:43.675887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.675900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675918 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.675927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.675961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.675980 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.676027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.676036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.676051 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676060 | orchestrator | 2025-02-19 09:00:43.676068 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-02-19 09:00:43.676077 | orchestrator | Wednesday 19 February 2025 08:53:58 +0000 (0:00:01.636) 0:01:49.037 **** 2025-02-19 09:00:43.676086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-19 09:00:43.676095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-19 09:00:43.676105 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-19 09:00:43.676204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-19 09:00:43.676214 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.676223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-19 09:00:43.676232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-19 09:00:43.676241 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676249 | orchestrator | 2025-02-19 09:00:43.676258 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-02-19 09:00:43.676267 | orchestrator | Wednesday 19 February 2025 08:54:00 +0000 (0:00:02.040) 0:01:51.077 **** 2025-02-19 09:00:43.676275 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676284 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.676293 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676301 | orchestrator | 2025-02-19 09:00:43.676310 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-02-19 09:00:43.676318 | orchestrator | Wednesday 19 February 2025 08:54:01 +0000 (0:00:00.521) 0:01:51.599 **** 2025-02-19 09:00:43.676327 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676336 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.676344 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676353 | orchestrator | 2025-02-19 09:00:43.676368 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-02-19 09:00:43.676382 | orchestrator | Wednesday 19 February 2025 08:54:03 +0000 (0:00:02.090) 0:01:53.690 **** 2025-02-19 09:00:43.676396 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676410 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.676424 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676438 | orchestrator | 2025-02-19 09:00:43.676452 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-02-19 09:00:43.676461 | orchestrator | Wednesday 19 February 2025 08:54:04 +0000 (0:00:00.917) 0:01:54.607 **** 2025-02-19 09:00:43.676479 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.676487 | orchestrator | 2025-02-19 09:00:43.676496 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-02-19 09:00:43.676510 | orchestrator | Wednesday 19 February 2025 08:54:05 +0000 
(0:00:00.806) 0:01:55.414 **** 2025-02-19 09:00:43.676531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-19 09:00:43.676542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-19 09:00:43.676551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-19 09:00:43.676560 | orchestrator | 2025-02-19 09:00:43.676568 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-02-19 09:00:43.676580 | orchestrator | Wednesday 19 February 2025 08:54:10 +0000 (0:00:05.268) 0:02:00.682 **** 2025-02-19 09:00:43.676589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-19 09:00:43.676602 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-19 09:00:43.676632 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-19 09:00:43.676650 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.676658 | orchestrator | 2025-02-19 09:00:43.676667 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-02-19 09:00:43.676676 | orchestrator | Wednesday 19 February 2025 08:54:14 +0000 (0:00:04.108) 0:02:04.791 **** 2025-02-19 09:00:43.676685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-19 09:00:43.676694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-19 09:00:43.676703 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676712 | orchestrator | skipping: [testbed-node-0] => 
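
The ceph-rgw entry differs from the other services in that it supplies a 'custom_member_list': the backend servers are given verbatim (the Ceph nodes testbed-node-3..5 on port 8081) rather than derived from the hosts of the service's group, while the frontend is exposed on port 6780. A small sketch of that selection logic, under the assumption that the item shape matches the log; backend_lines is a hypothetical helper:

# If a custom member list is present, use it verbatim; otherwise build members
# from the group's hosts on the service port.
def backend_lines(svc, group_hosts):
    if "custom_member_list" in svc:
        return list(svc["custom_member_list"])
    return [f"server {h} {ip}:{svc['port']} check inter 2000 rise 2 fall 5"
            for h, ip in group_hosts.items()]

radosgw = {
    "enabled": True, "mode": "http", "external": False, "port": "6780",
    "custom_member_list": [
        "server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5",
    ],
}
print("\n".join(backend_lines(radosgw, {})))
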
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-19 09:00:43.676721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-19 09:00:43.676730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-19 09:00:43.676743 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-19 09:00:43.676774 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.676784 | orchestrator | 2025-02-19 09:00:43.676796 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-02-19 09:00:43.676805 | orchestrator | Wednesday 19 February 2025 08:54:18 +0000 (0:00:03.611) 0:02:08.403 **** 2025-02-19 09:00:43.676813 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676822 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.676831 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676839 | orchestrator | 2025-02-19 09:00:43.676848 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-02-19 09:00:43.676857 | orchestrator | Wednesday 19 February 2025 08:54:18 +0000 (0:00:00.627) 0:02:09.030 **** 2025-02-19 09:00:43.676865 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.676874 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.676882 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.676891 | orchestrator | 2025-02-19 09:00:43.676899 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-02-19 09:00:43.676908 | orchestrator | Wednesday 19 February 2025 08:54:20 +0000 (0:00:01.556) 0:02:10.587 **** 2025-02-19 09:00:43.676972 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.676981 | orchestrator | 2025-02-19 09:00:43.677039 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-02-19 09:00:43.677050 | orchestrator | Wednesday 19 February 2025 08:54:21 +0000 (0:00:00.985) 
0:02:11.572 **** 2025-02-19 09:00:43.677059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.677069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.677126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.677175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677207 | orchestrator | 2025-02-19 09:00:43.677215 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-02-19 09:00:43.677223 | orchestrator | Wednesday 19 February 2025 08:54:27 +0000 (0:00:06.009) 0:02:17.582 **** 2025-02-19 09:00:43.677231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
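
Among the cinder items, only cinder-api gets an haproxy frontend (port 8776, internal and external); the cinder-volume item stands out for being privileged with ipc_mode 'host' and mounts of /dev, /run (shared) and /lib/modules, presumably because the volume service manages host block devices and iSCSI sessions, plus an extra bind mount of /opt/cinder-driver-dm-clone into the container's site-packages. Purely as a reading aid, a rough mapping of a few of those fields to docker-run style flags; this is not what the kolla container module actually executes, and the empty strings in the volume lists (unset optional mounts) are dropped:

def docker_run_args(name, svc):
    args = ["docker", "run", "--name", name]
    if svc.get("privileged"):
        args.append("--privileged")
    if svc.get("ipc_mode"):
        args += ["--ipc", svc["ipc_mode"]]
    for vol in svc.get("volumes", []):
        if vol:  # skip empty optional entries
            args += ["-v", vol]
    args.append(svc["image"])
    return args

cinder_volume = {
    "image": "registry.osism.tech/kolla/cinder-volume:2024.1",
    "privileged": True,
    "ipc_mode": "host",
    "volumes": ["/dev/:/dev/", "/run:/run:shared", "/lib/modules:/lib/modules:ro",
                "cinder:/var/lib/cinder", "iscsi_info:/etc/iscsi", ""],
}
print(" ".join(docker_run_args("cinder_volume", cinder_volume)))
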
'8776', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.677244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677273 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.677281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.677290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677325 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.677337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.677346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-02-19 09:00:43.677359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677384 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.677392 | orchestrator | 2025-02-19 09:00:43.677401 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-02-19 09:00:43.677419 | orchestrator | Wednesday 19 February 2025 08:54:28 +0000 (0:00:01.384) 0:02:18.966 **** 2025-02-19 09:00:43.677428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-19 09:00:43.677436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-19 09:00:43.677445 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.677453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-19 09:00:43.677461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-19 09:00:43.677469 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.677481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-19 09:00:43.677489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-19 09:00:43.677498 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.677506 | orchestrator | 2025-02-19 09:00:43.677569 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-02-19 09:00:43.677579 | orchestrator | Wednesday 19 February 2025 08:54:30 +0000 (0:00:01.401) 0:02:20.368 **** 2025-02-19 09:00:43.677587 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.677595 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.677607 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.677615 | orchestrator | 2025-02-19 09:00:43.677623 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-02-19 09:00:43.677631 | orchestrator | Wednesday 19 February 2025 08:54:30 +0000 (0:00:00.582) 0:02:20.951 **** 2025-02-19 09:00:43.677639 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.677648 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.677660 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.677669 | orchestrator | 2025-02-19 09:00:43.677676 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-02-19 09:00:43.677684 | orchestrator | Wednesday 19 February 2025 08:54:32 +0000 (0:00:01.836) 0:02:22.787 **** 2025-02-19 09:00:43.677692 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.677700 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.677708 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.677716 | orchestrator | 2025-02-19 09:00:43.677725 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-02-19 09:00:43.677733 | orchestrator | Wednesday 19 February 2025 08:54:32 +0000 (0:00:00.396) 0:02:23.183 **** 2025-02-19 09:00:43.677741 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.677748 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.677756 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.677764 | orchestrator | 2025-02-19 09:00:43.677772 | orchestrator | TASK [include_role : designate] ************************************************ 2025-02-19 09:00:43.677780 | orchestrator | Wednesday 19 February 2025 08:54:34 +0000 (0:00:01.204) 0:02:24.387 **** 2025-02-19 09:00:43.677788 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.677796 | orchestrator | 2025-02-19 09:00:43.677804 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-02-19 09:00:43.677812 | orchestrator | Wednesday 19 February 2025 08:54:35 +0000 (0:00:01.619) 0:02:26.007 **** 2025-02-19 09:00:43.677820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:00:43.677829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:00:43.677849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:00:43.677858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:00:43.677879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.677970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:00:43.678002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:00:43.678041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678085 | orchestrator | 2025-02-19 09:00:43.678093 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-02-19 09:00:43.678101 | orchestrator | Wednesday 19 February 2025 08:54:43 +0000 (0:00:07.864) 0:02:33.871 **** 2025-02-19 09:00:43.678117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:00:43.678137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:00:43.678145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 
09:00:43.678185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678203 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.678221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:00:43.678230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:00:43.678238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678261 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678291 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.678304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:00:43.678313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:00:43.678321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.678388 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.678401 | orchestrator | 2025-02-19 09:00:43.678414 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-02-19 09:00:43.678427 | orchestrator | Wednesday 19 February 2025 08:54:45 +0000 (0:00:02.035) 0:02:35.906 **** 2025-02-19 09:00:43.678440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-19 
09:00:43.678451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-19 09:00:43.678464 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.678477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-19 09:00:43.678491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-19 09:00:43.678503 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.678517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-19 09:00:43.678530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-19 09:00:43.678543 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.678556 | orchestrator | 2025-02-19 09:00:43.678569 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-02-19 09:00:43.678588 | orchestrator | Wednesday 19 February 2025 08:54:47 +0000 (0:00:01.771) 0:02:37.678 **** 2025-02-19 09:00:43.678602 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.678614 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.678628 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.678640 | orchestrator | 2025-02-19 09:00:43.678654 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-02-19 09:00:43.678666 | orchestrator | Wednesday 19 February 2025 08:54:47 +0000 (0:00:00.253) 0:02:37.932 **** 2025-02-19 09:00:43.678680 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.678693 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.678706 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.678720 | orchestrator | 2025-02-19 09:00:43.678728 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-02-19 09:00:43.678744 | orchestrator | Wednesday 19 February 2025 08:54:48 +0000 (0:00:01.285) 0:02:39.217 **** 2025-02-19 09:00:43.678758 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.678771 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.678784 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.678797 | orchestrator | 2025-02-19 09:00:43.678810 | orchestrator | TASK [include_role : glance] *************************************************** 2025-02-19 09:00:43.678823 | orchestrator | Wednesday 19 February 2025 08:54:49 +0000 (0:00:00.469) 0:02:39.687 **** 2025-02-19 09:00:43.678837 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.678852 | orchestrator | 2025-02-19 09:00:43.678865 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-02-19 09:00:43.678878 | orchestrator | Wednesday 19 February 2025 08:54:50 +0000 
(0:00:00.972) 0:02:40.659 **** 2025-02-19 09:00:43.678910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-19 09:00:43.678927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.678963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-19 09:00:43.679030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.679056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-19 09:00:43.679089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.679122 | orchestrator | 2025-02-19 09:00:43.679136 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-02-19 09:00:43.679150 | orchestrator | Wednesday 19 February 2025 08:54:55 +0000 (0:00:05.187) 0:02:45.847 **** 2025-02-19 09:00:43.679163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-19 09:00:43.679191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.679212 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-19 09:00:43.679248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.679267 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-19 09:00:43.679306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.679314 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679321 | orchestrator | 2025-02-19 09:00:43.679328 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-02-19 09:00:43.679335 | orchestrator | Wednesday 19 February 2025 08:54:59 +0000 (0:00:03.917) 0:02:49.764 **** 2025-02-19 09:00:43.679342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-19 09:00:43.679350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-19 09:00:43.679361 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-19 09:00:43.679376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-19 09:00:43.679383 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-19 09:00:43.679398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-19 09:00:43.679405 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679412 | orchestrator | 2025-02-19 09:00:43.679419 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-02-19 09:00:43.679426 | orchestrator | Wednesday 19 February 2025 08:55:11 +0000 (0:00:12.307) 0:03:02.072 **** 2025-02-19 09:00:43.679433 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679440 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679447 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679454 | orchestrator | 2025-02-19 09:00:43.679461 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-02-19 09:00:43.679467 | orchestrator | Wednesday 19 February 2025 08:55:12 +0000 (0:00:00.856) 0:03:02.928 **** 2025-02-19 09:00:43.679474 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679485 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679492 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679499 | orchestrator | 2025-02-19 09:00:43.679506 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-02-19 09:00:43.679513 | orchestrator | Wednesday 19 February 2025 08:55:14 +0000 (0:00:01.598) 0:03:04.527 **** 2025-02-19 09:00:43.679520 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679527 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679534 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679540 | orchestrator | 2025-02-19 09:00:43.679547 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-02-19 09:00:43.679554 | orchestrator | Wednesday 19 February 2025 08:55:14 +0000 (0:00:00.434) 0:03:04.961 **** 2025-02-19 09:00:43.679561 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.679572 | orchestrator | 2025-02-19 09:00:43.679579 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-02-19 09:00:43.679586 | orchestrator | Wednesday 19 
February 2025 08:55:15 +0000 (0:00:01.037) 0:03:05.999 **** 2025-02-19 09:00:43.679646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:00:43.679655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:00:43.679663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:00:43.679670 | orchestrator | 2025-02-19 09:00:43.679677 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-02-19 09:00:43.679684 | orchestrator | Wednesday 19 February 2025 08:55:22 +0000 (0:00:07.064) 0:03:13.064 **** 2025-02-19 09:00:43.679691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 09:00:43.679698 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 09:00:43.679721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 09:00:43.679728 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679735 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679742 | orchestrator | 2025-02-19 09:00:43.679749 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-02-19 09:00:43.679756 | orchestrator | Wednesday 19 February 2025 08:55:23 +0000 (0:00:01.040) 0:03:14.104 **** 2025-02-19 09:00:43.679763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-19 09:00:43.679773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-19 09:00:43.679781 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-19 09:00:43.679795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-19 09:00:43.679802 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-19 09:00:43.679817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-19 09:00:43.679824 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679831 | orchestrator | 2025-02-19 09:00:43.679838 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-02-19 09:00:43.679845 | orchestrator | Wednesday 19 February 2025 08:55:25 +0000 (0:00:01.170) 0:03:15.275 **** 2025-02-19 09:00:43.679852 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679859 | 
orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679866 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679873 | orchestrator | 2025-02-19 09:00:43.679880 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-02-19 09:00:43.679887 | orchestrator | Wednesday 19 February 2025 08:55:25 +0000 (0:00:00.840) 0:03:16.115 **** 2025-02-19 09:00:43.679894 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.679901 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.679911 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.679918 | orchestrator | 2025-02-19 09:00:43.679928 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-02-19 09:00:43.679935 | orchestrator | Wednesday 19 February 2025 08:55:28 +0000 (0:00:02.724) 0:03:18.840 **** 2025-02-19 09:00:43.679942 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.679949 | orchestrator | 2025-02-19 09:00:43.679956 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-02-19 09:00:43.679966 | orchestrator | Wednesday 19 February 2025 08:55:30 +0000 (0:00:01.751) 0:03:20.592 **** 2025-02-19 09:00:43.679985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.680010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.680018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.680025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.680033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.680047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.680055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.680062 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.680069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.680077 | orchestrator | 2025-02-19 09:00:43.680084 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-02-19 09:00:43.680091 | orchestrator | Wednesday 19 February 2025 08:55:45 +0000 (0:00:15.171) 0:03:35.763 **** 2025-02-19 09:00:43.680098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.680109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  
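The heat haproxy settings being looped over above are easier to read when laid out as YAML. The sketch below only re-renders the values already shown in the logged items (ports 8004/8000, external FQDN api.testbed.osism.xyz, tls_backend 'no') in the layout kolla-ansible service definitions typically use; it is a readability aid, not the literal contents of any generated file.

    # values copied from the heat-api / heat-api-cfn items logged above
    haproxy:
      heat_api:
        enabled: true
        mode: "http"
        external: false
        port: "8004"
        listen_port: "8004"
        tls_backend: "no"
      heat_api_external:
        enabled: true
        mode: "http"
        external: true
        external_fqdn: "api.testbed.osism.xyz"
        port: "8004"
        listen_port: "8004"
        tls_backend: "no"
      heat_api_cfn:
        enabled: true
        mode: "http"
        external: false
        port: "8000"
        listen_port: "8000"
        tls_backend: "no"
      heat_api_cfn_external:
        enabled: true
        mode: "http"
        external: true
        external_fqdn: "api.testbed.osism.xyz"
        port: "8000"
        listen_port: "8000"
        tls_backend: "no"

With tls_backend set to "no", the proxy speaks plain HTTP to the heat-api (8004) and heat-api-cfn (8000) backends, and the *_external entries publish the same ports behind api.testbed.osism.xyz.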
2025-02-19 09:00:43.680119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.680127 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.680141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.680148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.680159 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.680178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.680186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.680193 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680200 | orchestrator | 2025-02-19 09:00:43.680207 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-02-19 09:00:43.680214 | orchestrator | Wednesday 19 February 2025 08:55:47 +0000 (0:00:02.182) 0:03:37.946 **** 2025-02-19 09:00:43.680221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680252 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680295 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-19 09:00:43.680332 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680339 | orchestrator | 2025-02-19 09:00:43.680346 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-02-19 09:00:43.680353 | orchestrator | Wednesday 19 February 2025 08:55:49 +0000 (0:00:02.228) 0:03:40.174 **** 2025-02-19 09:00:43.680360 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680367 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680374 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680381 | orchestrator | 2025-02-19 09:00:43.680388 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-02-19 09:00:43.680394 | orchestrator | Wednesday 19 February 2025 08:55:50 +0000 (0:00:00.718) 0:03:40.893 **** 2025-02-19 09:00:43.680401 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680408 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680415 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680422 | orchestrator | 2025-02-19 09:00:43.680429 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-02-19 09:00:43.680437 | orchestrator | Wednesday 19 February 2025 08:55:52 +0000 (0:00:02.070) 0:03:42.964 **** 2025-02-19 09:00:43.680448 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.680460 | orchestrator | 2025-02-19 09:00:43.680467 | orchestrator | TASK 
[haproxy-config : Copying over horizon haproxy config] ******************** 2025-02-19 09:00:43.680474 | orchestrator | Wednesday 19 February 2025 08:55:54 +0000 (0:00:01.317) 0:03:44.282 **** 2025-02-19 09:00:43.680482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:00:43.680499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:00:43.680508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:00:43.680519 | orchestrator | 2025-02-19 09:00:43.680526 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-02-19 09:00:43.680533 | orchestrator | Wednesday 19 February 2025 08:56:01 +0000 (0:00:07.393) 0:03:51.675 **** 2025-02-19 
09:00:43.680545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:00:43.680557 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:00:43.680572 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:00:43.680598 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680605 | orchestrator | 2025-02-19 09:00:43.680665 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-02-19 09:00:43.680673 | orchestrator | Wednesday 19 February 2025 08:56:02 +0000 (0:00:00.980) 0:03:52.656 **** 2025-02-19 
09:00:43.680680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-19 09:00:43.680688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-19 09:00:43.680696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-19 09:00:43.680705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-19 09:00:43.680712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-19 09:00:43.680720 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-19 09:00:43.680742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-19 09:00:43.680749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-19 09:00:43.680757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-19 09:00:43.680769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-19 
09:00:43.680776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-19 09:00:43.680783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-19 09:00:43.680802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-19 09:00:43.680809 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-19 09:00:43.680832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-19 09:00:43.680839 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680846 | orchestrator | 2025-02-19 09:00:43.680853 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-02-19 09:00:43.680860 | orchestrator | Wednesday 19 February 2025 08:56:04 +0000 (0:00:02.360) 0:03:55.017 **** 2025-02-19 09:00:43.680867 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680874 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680881 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680888 | orchestrator | 2025-02-19 09:00:43.680895 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-02-19 09:00:43.680902 | orchestrator | Wednesday 19 February 2025 08:56:05 +0000 (0:00:00.645) 0:03:55.662 **** 2025-02-19 09:00:43.680909 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680916 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680923 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680930 | orchestrator | 2025-02-19 09:00:43.680937 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-02-19 09:00:43.680944 | orchestrator | Wednesday 19 February 2025 08:56:07 +0000 (0:00:01.669) 0:03:57.332 **** 2025-02-19 09:00:43.680951 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.680958 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.680965 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.680972 | orchestrator | 2025-02-19 09:00:43.680979 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-02-19 09:00:43.680986 | orchestrator | Wednesday 19 February 2025 08:56:07 +0000 (0:00:00.319) 0:03:57.651 **** 2025-02-19 09:00:43.681006 | orchestrator | included: ironic for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-02-19 09:00:43.681013 | orchestrator | 2025-02-19 09:00:43.681020 | orchestrator | TASK [haproxy-config : Copying over ironic haproxy config] ********************* 2025-02-19 09:00:43.681027 | orchestrator | Wednesday 19 February 2025 08:56:09 +0000 (0:00:01.799) 0:03:59.451 **** 2025-02-19 09:00:43.681039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.681052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.681061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.681069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.681076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.681094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.681102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:00:43.681110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:00:43.681117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': 
['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:00:43.681125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.681132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:00:43.681143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:00:43.681153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:00:43.681161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:00:43.681168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.681175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:00:43.681183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:00:43.681194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:00:43.681204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:00:43.681212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 
'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.681219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:00:43.681226 | orchestrator | 2025-02-19 09:00:43.681234 | orchestrator | TASK [haproxy-config : Add configuration for ironic when using single external frontend] *** 2025-02-19 09:00:43.681241 | orchestrator | Wednesday 19 February 2025 08:56:25 +0000 (0:00:16.723) 0:04:16.174 **** 2025-02-19 09:00:43.681248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.681255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.681270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 
5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:00:43.681277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:00:43.681285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:00:43.681301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.681309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.681317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': 
{'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.681331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.681339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:00:43.681346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.681354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:00:43.681361 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.681373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': 
{'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:00:43.681384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:00:43.681395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:00:43.681564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:00:43.681580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.681588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:00:43.681605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:00:43.681613 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.681626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.681633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:00:43.681640 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.681647 | orchestrator | 2025-02-19 09:00:43.681655 | orchestrator | TASK [haproxy-config : Configuring firewall for ironic] ************************ 2025-02-19 09:00:43.681662 | orchestrator | Wednesday 19 February 2025 08:56:27 +0000 (0:00:01.759) 0:04:17.934 **** 2025-02-19 09:00:43.681676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-19 09:00:43.681683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-19 09:00:43.681691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-19 09:00:43.681699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-19 09:00:43.681706 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.681713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-19 09:00:43.681720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-19 09:00:43.681730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-19 09:00:43.681737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-19 09:00:43.681744 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.681752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-19 09:00:43.681759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-19 09:00:43.681767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-19 09:00:43.681777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-19 09:00:43.681785 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.681792 | orchestrator | 2025-02-19 09:00:43.681799 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL users config] ************* 2025-02-19 09:00:43.681806 | orchestrator | Wednesday 19 February 2025 08:56:29 +0000 (0:00:01.905) 0:04:19.839 **** 2025-02-19 09:00:43.681813 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.681822 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.681832 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.681843 | orchestrator | 2025-02-19 09:00:43.681854 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL rules config] ************* 2025-02-19 09:00:43.681865 | orchestrator | Wednesday 19 February 2025 08:56:30 +0000 (0:00:00.440) 0:04:20.280 **** 2025-02-19 09:00:43.681876 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.681885 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.681892 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.681899 | orchestrator | 2025-02-19 09:00:43.681906 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-02-19 09:00:43.681913 | orchestrator | Wednesday 19 February 2025 08:56:31 +0000 (0:00:01.419) 0:04:21.700 **** 2025-02-19 09:00:43.681920 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.681927 | orchestrator | 2025-02-19 09:00:43.681934 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-02-19 09:00:43.681940 
| orchestrator | Wednesday 19 February 2025 08:56:32 +0000 (0:00:00.991) 0:04:22.691 **** 2025-02-19 09:00:43.681952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:00:43.681960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:00:43.681968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:00:43.682007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:00:43.682046 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:00:43.682060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:00:43.682068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:00:43.682075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:00:43.682093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 
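Each changed/skipping result in this play embeds the full service definition as a Python literal after "(item=", which makes the raw job output convenient to mine. Assuming the console log has been saved locally, a sketch that recovers those items with ast.literal_eval and lists the container images they reference (the regex and function names are mine, not OSISM tooling):

# Sketch: recover the "(item={...})" dicts Ansible prints in this log and list
# the container images they reference. Only items that are complete Python
# literals parse; anything truncated is skipped.
import ast
import re

ITEM_RE = re.compile(r"\(item=(\{.*?\})\)\s", re.DOTALL)

def iter_items(log_text):
    for match in ITEM_RE.finditer(log_text):
        try:
            yield ast.literal_eval(match.group(1))
        except (ValueError, SyntaxError):
            continue  # wrapped or truncated item, ignore it

def images(log_text):
    found = set()
    for item in iter_items(log_text):
        value = item.get('value', {})
        if isinstance(value, dict) and 'image' in value:
            found.add(value['image'])
    return found

# Usage, assuming the raw console log was saved as job-output.txt:
# with open('job-output.txt') as fh:
#     print(sorted(images(fh.read())))
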
09:00:43.682120 | orchestrator | 2025-02-19 09:00:43.682128 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-02-19 09:00:43.682135 | orchestrator | Wednesday 19 February 2025 08:56:36 +0000 (0:00:03.725) 0:04:26.417 **** 2025-02-19 09:00:43.682142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:00:43.682150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:00:43.682160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:00:43.682168 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:00:43.682193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:00:43.682200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:00:43.682207 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:00:43.682231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:00:43.682239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:00:43.682246 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682253 | orchestrator | 2025-02-19 09:00:43.682264 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-02-19 09:00:43.682271 | orchestrator | Wednesday 19 February 2025 08:56:37 +0000 (0:00:00.982) 0:04:27.399 **** 2025-02-19 09:00:43.682281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-19 09:00:43.682291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-19 09:00:43.682299 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-19 09:00:43.682313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-19 09:00:43.682320 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-19 09:00:43.682334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-19 09:00:43.682341 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682348 | orchestrator | 2025-02-19 09:00:43.682355 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-02-19 09:00:43.682362 | orchestrator | Wednesday 19 February 2025 08:56:38 +0000 (0:00:01.060) 0:04:28.460 **** 2025-02-19 09:00:43.682369 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682376 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682383 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682390 | orchestrator | 2025-02-19 09:00:43.682397 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-02-19 09:00:43.682404 | orchestrator | Wednesday 19 February 2025 08:56:38 +0000 (0:00:00.462) 0:04:28.922 **** 2025-02-19 
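Every task header in this play is followed by a profile-style timing line such as "Wednesday 19 February 2025 08:56:38 +0000 (0:00:00.462) 0:04:28.460 ****". Comparing consecutive headers (08:56:09 to 08:56:25 around the 0:00:16.723 delta for the ironic haproxy config) shows the bracketed value is the duration of the task that just finished and the second value the cumulative play time. When a deployment drifts towards the job timeout, ranking those deltas is the quickest way to find the slow tasks; a standalone sketch (file and function names are illustrative):

# Sketch: rank tasks by the per-task duration reported in the timing lines of
# this log. The delta in parentheses is attributed to the previously named
# task, matching the header timestamps above.
import re

TASK_RE = re.compile(r"TASK \[(?P<name>[^\]]+)\]")
TIME_RE = re.compile(r"\((?P<h>\d+):(?P<m>\d{2}):(?P<s>\d{2}\.\d+)\) ")

def slowest_tasks(log_text, top=10):
    durations = []
    previous = None   # task whose duration the next timing line reports
    current = None
    for line in log_text.splitlines():
        task = TASK_RE.search(line)
        if task:
            previous, current = current, task.group('name')
            continue
        timing = TIME_RE.search(line)
        if timing and previous:
            h, m, s = timing.group('h', 'm', 's')
            durations.append((int(h) * 3600 + int(m) * 60 + float(s), previous))
            previous = None   # one timing line per task header
    return sorted(durations, reverse=True)[:top]

# Usage:
# with open('job-output.txt') as fh:
#     for seconds, name in slowest_tasks(fh.read()):
#         print(f"{seconds:8.2f}s  {name}")
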
09:00:43.682411 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682418 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682425 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682437 | orchestrator | 2025-02-19 09:00:43.682444 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-02-19 09:00:43.682451 | orchestrator | Wednesday 19 February 2025 08:56:40 +0000 (0:00:01.335) 0:04:30.258 **** 2025-02-19 09:00:43.682458 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682466 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682473 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682480 | orchestrator | 2025-02-19 09:00:43.682487 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-02-19 09:00:43.682494 | orchestrator | Wednesday 19 February 2025 08:56:40 +0000 (0:00:00.456) 0:04:30.714 **** 2025-02-19 09:00:43.682501 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.682508 | orchestrator | 2025-02-19 09:00:43.682515 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-02-19 09:00:43.682522 | orchestrator | Wednesday 19 February 2025 08:56:41 +0000 (0:00:01.168) 0:04:31.883 **** 2025-02-19 09:00:43.682544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:00:43.682554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.682561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:00:43.682569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.682581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:00:43.682596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.682603 | orchestrator | 2025-02-19 09:00:43.682611 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-02-19 09:00:43.682618 | orchestrator | Wednesday 19 February 2025 08:56:47 +0000 (0:00:05.735) 0:04:37.618 **** 2025-02-19 09:00:43.682625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:00:43.682632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.682639 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:00:43.682661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:00:43.682673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.682680 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.682695 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682702 | orchestrator | 2025-02-19 09:00:43.682709 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-02-19 09:00:43.682715 | orchestrator | Wednesday 19 February 2025 08:56:48 +0000 (0:00:01.371) 0:04:38.989 **** 2025-02-19 09:00:43.682723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-19 09:00:43.682730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-19 09:00:43.682737 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-19 09:00:43.682751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-19 09:00:43.682758 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-19 09:00:43.682772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-19 09:00:43.682787 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682795 | orchestrator | 2025-02-19 09:00:43.682802 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-02-19 09:00:43.682811 | 
orchestrator | Wednesday 19 February 2025 08:56:50 +0000 (0:00:01.826) 0:04:40.816 **** 2025-02-19 09:00:43.682819 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682826 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682833 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682839 | orchestrator | 2025-02-19 09:00:43.682847 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-02-19 09:00:43.682858 | orchestrator | Wednesday 19 February 2025 08:56:51 +0000 (0:00:00.618) 0:04:41.434 **** 2025-02-19 09:00:43.682868 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.682879 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.682889 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.682900 | orchestrator | 2025-02-19 09:00:43.682911 | orchestrator | TASK [include_role : manila] *************************************************** 2025-02-19 09:00:43.682922 | orchestrator | Wednesday 19 February 2025 08:56:53 +0000 (0:00:01.940) 0:04:43.375 **** 2025-02-19 09:00:43.682934 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.682944 | orchestrator | 2025-02-19 09:00:43.682956 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-02-19 09:00:43.682966 | orchestrator | Wednesday 19 February 2025 08:56:55 +0000 (0:00:02.122) 0:04:45.498 **** 2025-02-19 09:00:43.682983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-19 09:00:43.683049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-19 09:00:43.683103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  
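The manila items above follow the same pattern as the keystone and magnum tasks earlier in this run: only service entries that carry a non-empty 'haproxy' mapping (here manila-api) report "changed" when the haproxy config is templated, while the scheduler, share and data entries are skipped. A minimal Python sketch of that selection logic, assuming only the dict shapes dumped in this log (this is an illustration, not the kolla-ansible haproxy-config role or its Jinja2 templates):

# Hypothetical illustration only: mirrors the changed/skipping split visible in the
# log above, where just the services with a non-empty 'haproxy' mapping get listeners.
manila_services = {
    "manila-api": {
        "enabled": True,
        "haproxy": {
            "manila_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "8786", "listen_port": "8786",
            },
            "manila_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "8786", "listen_port": "8786",
            },
        },
    },
    "manila-scheduler": {"enabled": True},  # no 'haproxy' key -> skipped in the log
    "manila-share": {"enabled": True},      # no 'haproxy' key -> skipped in the log
    "manila-data": {"enabled": True},       # no 'haproxy' key -> skipped in the log
}

def services_needing_haproxy(services):
    # Yield (service name, listener name, listener config) for every enabled
    # service that defines at least one enabled haproxy listener.
    for name, svc in services.items():
        for listener, cfg in svc.get("haproxy", {}).items():
            if svc.get("enabled") and cfg.get("enabled") in (True, "yes"):
                yield name, listener, cfg

if __name__ == "__main__":
    for name, listener, cfg in services_needing_haproxy(manila_services):
        scope = "external" if cfg.get("external") else "internal"
        print(f"{name}: {listener} ({scope}, {cfg['mode']}) -> listen port {cfg['listen_port']}")

Run as-is, this prints one line per internal/external listener for manila-api and nothing for the skipped services, which matches the changed/skipping split recorded above.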
2025-02-19 09:00:43.683155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-19 09:00:43.683167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683210 | orchestrator | 2025-02-19 09:00:43.683220 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-02-19 09:00:43.683232 | orchestrator | Wednesday 19 February 2025 08:57:02 +0000 (0:00:07.364) 0:04:52.863 **** 2025-02-19 09:00:43.683256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-19 09:00:43.683268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683303 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.683314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-19 09:00:43.683333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-19 09:00:43.683344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683397 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.683412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.683422 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.683431 | orchestrator | 2025-02-19 09:00:43.683440 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-02-19 09:00:43.683453 | orchestrator | Wednesday 19 February 2025 08:57:04 +0000 (0:00:01.532) 0:04:54.396 **** 2025-02-19 09:00:43.683463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-19 09:00:43.683473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-19 09:00:43.683484 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.683494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-19 09:00:43.683503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-19 09:00:43.683513 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.683522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-19 09:00:43.683531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-19 09:00:43.683546 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.683557 | orchestrator | 2025-02-19 09:00:43.683567 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-02-19 09:00:43.683578 | orchestrator | Wednesday 19 February 2025 08:57:05 +0000 (0:00:01.608) 0:04:56.005 **** 2025-02-19 09:00:43.683587 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.683598 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.683608 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.683618 | orchestrator | 2025-02-19 
09:00:43.683629 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-02-19 09:00:43.683638 | orchestrator | Wednesday 19 February 2025 08:57:06 +0000 (0:00:00.585) 0:04:56.590 **** 2025-02-19 09:00:43.683648 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.683658 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.683669 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.683678 | orchestrator | 2025-02-19 09:00:43.683688 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-02-19 09:00:43.683699 | orchestrator | Wednesday 19 February 2025 08:57:07 +0000 (0:00:01.649) 0:04:58.239 **** 2025-02-19 09:00:43.683706 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.683712 | orchestrator | 2025-02-19 09:00:43.683718 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-02-19 09:00:43.683724 | orchestrator | Wednesday 19 February 2025 08:57:09 +0000 (0:00:01.628) 0:04:59.868 **** 2025-02-19 09:00:43.683731 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:00:43.683737 | orchestrator | 2025-02-19 09:00:43.683743 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-02-19 09:00:43.683749 | orchestrator | Wednesday 19 February 2025 08:57:13 +0000 (0:00:03.844) 0:05:03.712 **** 2025-02-19 09:00:43.683763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:00:43.683786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:00:43.683803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-19 09:00:43.683815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-19 09:00:43.683838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:00:43.683859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-19 09:00:43.683870 | orchestrator | 2025-02-19 09:00:43.683900 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-02-19 09:00:43.683911 | orchestrator | Wednesday 19 February 2025 08:57:18 +0000 (0:00:04.571) 0:05:08.283 **** 2025-02-19 09:00:43.683922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-19 09:00:43.683947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-19 09:00:43.683965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-19 09:00:43.683975 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.683986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-19 09:00:43.684012 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684036 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-19 09:00:43.684054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-19 09:00:43.684064 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684074 | orchestrator | 2025-02-19 09:00:43.684083 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-02-19 09:00:43.684093 | orchestrator | Wednesday 19 February 2025 08:57:21 +0000 (0:00:03.567) 0:05:11.851 **** 2025-02-19 09:00:43.684102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-19 09:00:43.684112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-19 09:00:43.684121 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.684132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-19 09:00:43.684143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-19 09:00:43.684153 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-19 09:00:43.684196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-19 09:00:43.684207 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684217 | orchestrator | 2025-02-19 09:00:43.684227 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-02-19 09:00:43.684238 | orchestrator | Wednesday 19 February 2025 08:57:26 +0000 (0:00:05.004) 0:05:16.856 **** 2025-02-19 09:00:43.684248 | orchestrator | skipping: 
[testbed-node-0] 2025-02-19 09:00:43.684257 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684268 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684279 | orchestrator | 2025-02-19 09:00:43.684288 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-02-19 09:00:43.684298 | orchestrator | Wednesday 19 February 2025 08:57:26 +0000 (0:00:00.362) 0:05:17.219 **** 2025-02-19 09:00:43.684308 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.684318 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684328 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684338 | orchestrator | 2025-02-19 09:00:43.684348 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-02-19 09:00:43.684358 | orchestrator | Wednesday 19 February 2025 08:57:28 +0000 (0:00:01.843) 0:05:19.062 **** 2025-02-19 09:00:43.684367 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.684376 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684385 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684395 | orchestrator | 2025-02-19 09:00:43.684409 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-02-19 09:00:43.684418 | orchestrator | Wednesday 19 February 2025 08:57:29 +0000 (0:00:00.604) 0:05:19.667 **** 2025-02-19 09:00:43.684427 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.684436 | orchestrator | 2025-02-19 09:00:43.684444 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-02-19 09:00:43.684453 | orchestrator | Wednesday 19 February 2025 08:57:31 +0000 (0:00:01.675) 0:05:21.342 **** 2025-02-19 09:00:43.684465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-19 09:00:43.684477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-19 09:00:43.684493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-19 09:00:43.684503 | orchestrator | 2025-02-19 09:00:43.684513 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-02-19 09:00:43.684528 | orchestrator | Wednesday 19 February 2025 08:57:32 +0000 (0:00:01.623) 0:05:22.966 **** 2025-02-19 09:00:43.684539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-19 09:00:43.684549 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.684559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-19 09:00:43.684568 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-19 09:00:43.684598 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684608 | orchestrator | 2025-02-19 
09:00:43.684617 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-02-19 09:00:43.684626 | orchestrator | Wednesday 19 February 2025 08:57:33 +0000 (0:00:00.799) 0:05:23.766 **** 2025-02-19 09:00:43.684642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-19 09:00:43.684653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-19 09:00:43.684662 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.684671 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-19 09:00:43.684691 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684701 | orchestrator | 2025-02-19 09:00:43.684710 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-02-19 09:00:43.684719 | orchestrator | Wednesday 19 February 2025 08:57:34 +0000 (0:00:01.153) 0:05:24.919 **** 2025-02-19 09:00:43.684728 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.684738 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684752 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684763 | orchestrator | 2025-02-19 09:00:43.684773 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-02-19 09:00:43.684782 | orchestrator | Wednesday 19 February 2025 08:57:35 +0000 (0:00:00.346) 0:05:25.265 **** 2025-02-19 09:00:43.684792 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.684801 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.684811 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.684820 | orchestrator | 2025-02-19 09:00:43.684836 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-02-19 09:00:43.684976 | orchestrator | Wednesday 19 February 2025 08:57:36 +0000 (0:00:01.672) 0:05:26.938 **** 2025-02-19 09:00:43.685107 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.685127 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.685133 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.685140 | orchestrator | 2025-02-19 09:00:43.685146 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-02-19 09:00:43.685153 | orchestrator | Wednesday 19 February 2025 08:57:37 +0000 (0:00:00.595) 0:05:27.533 **** 2025-02-19 09:00:43.685159 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.685166 | orchestrator | 2025-02-19 09:00:43.685172 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-02-19 09:00:43.685178 | orchestrator | Wednesday 19 February 2025 08:57:39 
+0000 (0:00:01.776) 0:05:29.310 **** 2025-02-19 09:00:43.685186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:00:43.685194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:00:43.685305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:00:43.685341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.685349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.685355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:00:43.685435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 
09:00:43.685466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:00:43.685574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.685585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.685683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:00:43.685695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.685701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.685718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.685805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.685817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.685841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.685851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.685931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.685961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.685969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.685981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2025-02-19 09:00:43.686032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.686071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.686091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.686097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.686105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.686153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.686220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.686230 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686241 | orchestrator | 2025-02-19 09:00:43.686252 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-02-19 09:00:43.686263 | orchestrator | Wednesday 19 February 2025 08:57:45 +0000 (0:00:06.007) 0:05:35.317 **** 2025-02-19 09:00:43.686281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:00:43.686334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:00:43.686370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.686448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:00:43.686473 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.686485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-02-19 09:00:43.686586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.686619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:00:43.686630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.686640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:00:43.686727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.686752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.686775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.686792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.686864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686881 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.686901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.686953 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.687033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:00:43.687057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.687067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.687081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.687087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.687136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.687159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.687170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.687232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.687245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:00:43.687266 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.687284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:00:43.687312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:00:43.687334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687342 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.687348 | orchestrator | 2025-02-19 09:00:43.687354 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-02-19 09:00:43.687360 | orchestrator | Wednesday 19 February 2025 08:57:47 +0000 (0:00:02.080) 0:05:37.398 **** 2025-02-19 09:00:43.687366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-02-19 09:00:43.687373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-19 09:00:43.687379 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.687385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-02-19 09:00:43.687391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-19 09:00:43.687397 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.687403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-02-19 09:00:43.687409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-02-19 09:00:43.687415 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.687421 | orchestrator | 2025-02-19 09:00:43.687427 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-02-19 09:00:43.687442 | orchestrator | Wednesday 19 February 2025 08:57:49 +0000 (0:00:02.685) 0:05:40.083 **** 2025-02-19 09:00:43.687449 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.687455 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.687461 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.687466 | orchestrator | 2025-02-19 09:00:43.687472 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-02-19 09:00:43.687478 | orchestrator | Wednesday 19 February 2025 08:57:50 +0000 (0:00:00.308) 0:05:40.392 **** 2025-02-19 09:00:43.687489 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.687502 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.687508 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.687514 | orchestrator | 2025-02-19 09:00:43.687520 | orchestrator | TASK [include_role : placement] ************************************************ 2025-02-19 09:00:43.687525 | orchestrator | Wednesday 19 February 2025 08:57:51 +0000 (0:00:01.861) 0:05:42.253 **** 2025-02-19 09:00:43.687531 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.687537 | orchestrator | 2025-02-19 09:00:43.687543 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-02-19 09:00:43.687549 | orchestrator | Wednesday 19 February 2025 08:57:53 +0000 (0:00:01.854) 0:05:44.107 **** 2025-02-19 09:00:43.687561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.687586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.687593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.687600 | orchestrator | 2025-02-19 09:00:43.687606 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-02-19 09:00:43.687612 | orchestrator | Wednesday 19 February 2025 08:57:58 +0000 (0:00:04.895) 0:05:49.003 **** 2025-02-19 09:00:43.687618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.687628 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.687639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.687645 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.687664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.687671 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.687677 | orchestrator | 2025-02-19 09:00:43.687683 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-02-19 09:00:43.687689 | orchestrator | Wednesday 19 February 2025 08:57:59 +0000 (0:00:01.056) 0:05:50.060 **** 2025-02-19 09:00:43.687695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-19 09:00:43.687704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-19 09:00:43.687712 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.687718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-19 09:00:43.687724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-19 09:00:43.687734 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.687740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-19 09:00:43.687746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-19 09:00:43.687752 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.687758 | orchestrator | 2025-02-19 09:00:43.687764 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-02-19 09:00:43.687770 | orchestrator | Wednesday 19 February 2025 08:58:00 +0000 (0:00:01.026) 0:05:51.086 **** 2025-02-19 09:00:43.687776 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.687782 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.687788 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.687794 | orchestrator | 2025-02-19 09:00:43.687799 | orchestrator | TASK [proxysql-config : Copying over placement 
ProxySQL rules config] ********** 2025-02-19 09:00:43.687805 | orchestrator | Wednesday 19 February 2025 08:58:01 +0000 (0:00:00.623) 0:05:51.710 **** 2025-02-19 09:00:43.687811 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.687817 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.687823 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.687829 | orchestrator | 2025-02-19 09:00:43.687835 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-02-19 09:00:43.687840 | orchestrator | Wednesday 19 February 2025 08:58:03 +0000 (0:00:01.778) 0:05:53.489 **** 2025-02-19 09:00:43.687846 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.687852 | orchestrator | 2025-02-19 09:00:43.687858 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-02-19 09:00:43.687864 | orchestrator | Wednesday 19 February 2025 08:58:04 +0000 (0:00:01.598) 0:05:55.087 **** 2025-02-19 09:00:43.687870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.687895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-02-19 09:00:43.687913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.687919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.687960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.687972 | orchestrator | 2025-02-19 09:00:43.687978 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-02-19 09:00:43.687984 | orchestrator | Wednesday 19 February 2025 08:58:11 +0000 (0:00:06.830) 0:06:01.918 **** 2025-02-19 09:00:43.688008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.688020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.688041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.688054 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.688066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.688073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.688079 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.688110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.688121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.688127 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688133 | orchestrator | 2025-02-19 09:00:43.688139 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-02-19 09:00:43.688145 | orchestrator | Wednesday 19 February 2025 08:58:12 +0000 (0:00:01.136) 0:06:03.054 **** 2025-02-19 09:00:43.688151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688187 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688205 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-19 09:00:43.688238 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688244 | orchestrator | 2025-02-19 09:00:43.688250 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-02-19 09:00:43.688256 | orchestrator | Wednesday 19 February 2025 08:58:14 +0000 (0:00:01.908) 0:06:04.962 **** 2025-02-19 09:00:43.688274 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688281 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688287 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688293 | orchestrator | 2025-02-19 09:00:43.688299 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-02-19 09:00:43.688305 | orchestrator | Wednesday 19 February 2025 08:58:15 +0000 (0:00:00.638) 0:06:05.601 **** 2025-02-19 09:00:43.688310 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688316 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688322 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688328 | orchestrator | 2025-02-19 09:00:43.688334 | orchestrator | TASK [include_role : nova-cell] ************************************************ 
2025-02-19 09:00:43.688340 | orchestrator | Wednesday 19 February 2025 08:58:17 +0000 (0:00:01.704) 0:06:07.305 **** 2025-02-19 09:00:43.688346 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.688352 | orchestrator | 2025-02-19 09:00:43.688357 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-02-19 09:00:43.688363 | orchestrator | Wednesday 19 February 2025 08:58:19 +0000 (0:00:01.967) 0:06:09.273 **** 2025-02-19 09:00:43.688369 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-02-19 09:00:43.688375 | orchestrator | 2025-02-19 09:00:43.688381 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-02-19 09:00:43.688387 | orchestrator | Wednesday 19 February 2025 08:58:20 +0000 (0:00:01.489) 0:06:10.762 **** 2025-02-19 09:00:43.688393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-02-19 09:00:43.688399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-02-19 09:00:43.688406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-02-19 09:00:43.688415 | orchestrator | 2025-02-19 09:00:43.688421 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-02-19 09:00:43.688427 | orchestrator | Wednesday 19 February 2025 08:58:26 +0000 (0:00:06.185) 0:06:16.947 **** 2025-02-19 09:00:43.688433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688439 | orchestrator | skipping: [testbed-node-0] 
2025-02-19 09:00:43.688446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688452 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688479 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688485 | orchestrator | 2025-02-19 09:00:43.688490 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-02-19 09:00:43.688496 | orchestrator | Wednesday 19 February 2025 08:58:29 +0000 (0:00:02.688) 0:06:19.635 **** 2025-02-19 09:00:43.688502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-19 09:00:43.688508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-19 09:00:43.688514 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-19 09:00:43.688527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-19 09:00:43.688535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-19 09:00:43.688542 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-02-19 09:00:43.688557 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688563 | orchestrator | 2025-02-19 09:00:43.688569 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL 
users config] ********** 2025-02-19 09:00:43.688574 | orchestrator | Wednesday 19 February 2025 08:58:31 +0000 (0:00:02.201) 0:06:21.837 **** 2025-02-19 09:00:43.688580 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688586 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688595 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688601 | orchestrator | 2025-02-19 09:00:43.688607 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-19 09:00:43.688616 | orchestrator | Wednesday 19 February 2025 08:58:32 +0000 (0:00:00.637) 0:06:22.474 **** 2025-02-19 09:00:43.688622 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688628 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688634 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688640 | orchestrator | 2025-02-19 09:00:43.688646 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-02-19 09:00:43.688652 | orchestrator | Wednesday 19 February 2025 08:58:33 +0000 (0:00:01.363) 0:06:23.837 **** 2025-02-19 09:00:43.688658 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-spicehtml5proxy) 2025-02-19 09:00:43.688664 | orchestrator | 2025-02-19 09:00:43.688670 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-02-19 09:00:43.688676 | orchestrator | Wednesday 19 February 2025 08:58:35 +0000 (0:00:01.572) 0:06:25.410 **** 2025-02-19 09:00:43.688686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688693 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688719 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688731 | orchestrator | skipping: [testbed-node-2] 2025-02-19 
09:00:43.688737 | orchestrator | 2025-02-19 09:00:43.688743 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-02-19 09:00:43.688749 | orchestrator | Wednesday 19 February 2025 08:58:37 +0000 (0:00:02.016) 0:06:27.426 **** 2025-02-19 09:00:43.688755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688764 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688777 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-19 09:00:43.688789 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688795 | orchestrator | 2025-02-19 09:00:43.688801 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-02-19 09:00:43.688806 | orchestrator | Wednesday 19 February 2025 08:58:39 +0000 (0:00:02.105) 0:06:29.532 **** 2025-02-19 09:00:43.688812 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688873 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688879 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688885 | orchestrator | 2025-02-19 09:00:43.688891 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-19 09:00:43.688897 | orchestrator | Wednesday 19 February 2025 08:58:41 +0000 (0:00:02.426) 0:06:31.959 **** 2025-02-19 09:00:43.688903 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.688909 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688915 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688921 | orchestrator | 2025-02-19 09:00:43.688927 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-19 09:00:43.688933 | orchestrator | Wednesday 19 February 2025 08:58:42 +0000 (0:00:00.622) 0:06:32.581 **** 2025-02-19 09:00:43.688938 | orchestrator | 
skipping: [testbed-node-0] 2025-02-19 09:00:43.688944 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.688950 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.688956 | orchestrator | 2025-02-19 09:00:43.688962 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-02-19 09:00:43.688968 | orchestrator | Wednesday 19 February 2025 08:58:43 +0000 (0:00:01.253) 0:06:33.834 **** 2025-02-19 09:00:43.688974 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-02-19 09:00:43.688980 | orchestrator | 2025-02-19 09:00:43.688986 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-02-19 09:00:43.689019 | orchestrator | Wednesday 19 February 2025 08:58:45 +0000 (0:00:01.478) 0:06:35.313 **** 2025-02-19 09:00:43.689043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-19 09:00:43.689054 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-19 09:00:43.689067 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-19 09:00:43.689079 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689085 | orchestrator | 2025-02-19 09:00:43.689091 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-02-19 09:00:43.689097 | orchestrator | Wednesday 19 February 2025 08:58:47 +0000 (0:00:02.021) 0:06:37.335 **** 2025-02-19 09:00:43.689103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-19 09:00:43.689109 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-19 09:00:43.689121 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-19 09:00:43.689133 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689139 | orchestrator | 2025-02-19 09:00:43.689145 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-02-19 09:00:43.689151 | orchestrator | Wednesday 19 February 2025 08:58:49 +0000 (0:00:02.309) 0:06:39.644 **** 2025-02-19 09:00:43.689157 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689162 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689172 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689178 | orchestrator | 2025-02-19 09:00:43.689184 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-19 09:00:43.689189 | orchestrator | Wednesday 19 February 2025 08:58:51 +0000 (0:00:02.329) 0:06:41.973 **** 2025-02-19 09:00:43.689195 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689201 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689220 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689227 | orchestrator | 2025-02-19 09:00:43.689233 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-19 09:00:43.689239 | orchestrator | Wednesday 19 February 2025 08:58:52 +0000 (0:00:00.628) 0:06:42.602 **** 2025-02-19 09:00:43.689244 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689250 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689256 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689262 | orchestrator | 2025-02-19 09:00:43.689268 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-02-19 09:00:43.689274 | orchestrator | Wednesday 19 February 2025 08:58:54 +0000 (0:00:01.808) 0:06:44.411 **** 2025-02-19 09:00:43.689279 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.689285 | orchestrator | 2025-02-19 09:00:43.689291 | orchestrator | TASK [haproxy-config : Copying over 
octavia haproxy config] ******************** 2025-02-19 09:00:43.689297 | orchestrator | Wednesday 19 February 2025 08:58:56 +0000 (0:00:01.988) 0:06:46.400 **** 2025-02-19 09:00:43.689303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.689311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:00:43.689317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': 
'30'}}})  2025-02-19 09:00:43.689353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.689360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:00:43.689366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.689388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.689408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:00:43.689415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.689433 | orchestrator | 2025-02-19 09:00:43.689439 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-02-19 09:00:43.689446 | orchestrator | Wednesday 19 February 2025 08:59:01 +0000 (0:00:05.843) 0:06:52.243 **** 
2025-02-19 09:00:43.689452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.689463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:00:43.689482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.689502 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689508 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.689517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:00:43.689523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.689555 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.689567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:00:43.689574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:00:43.689590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:00:43.689596 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689602 | orchestrator | 2025-02-19 09:00:43.689623 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-02-19 09:00:43.689630 | orchestrator | Wednesday 19 February 
2025 08:59:02 +0000 (0:00:00.863) 0:06:53.107 **** 2025-02-19 09:00:43.689636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-19 09:00:43.689642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-19 09:00:43.689648 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-19 09:00:43.689660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-19 09:00:43.689666 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-19 09:00:43.689683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-19 09:00:43.689690 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689696 | orchestrator | 2025-02-19 09:00:43.689702 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-02-19 09:00:43.689707 | orchestrator | Wednesday 19 February 2025 08:59:04 +0000 (0:00:01.788) 0:06:54.895 **** 2025-02-19 09:00:43.689714 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689723 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689729 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689735 | orchestrator | 2025-02-19 09:00:43.689741 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-02-19 09:00:43.689747 | orchestrator | Wednesday 19 February 2025 08:59:04 +0000 (0:00:00.332) 0:06:55.227 **** 2025-02-19 09:00:43.689752 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689758 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689764 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689770 | orchestrator | 2025-02-19 09:00:43.689776 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-02-19 09:00:43.689782 | orchestrator | Wednesday 19 February 2025 08:59:06 +0000 (0:00:01.835) 0:06:57.063 **** 2025-02-19 09:00:43.689787 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.689793 | orchestrator | 2025-02-19 09:00:43.689799 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-02-19 09:00:43.689805 | orchestrator | Wednesday 19 February 2025 08:59:08 +0000 (0:00:02.029) 0:06:59.093 **** 2025-02-19 09:00:43.689812 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:00:43.689831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:00:43.689838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:00:43.689845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:00:43.689855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:00:43.689862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:00:43.689868 | orchestrator | 2025-02-19 09:00:43.689887 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-02-19 09:00:43.689894 | orchestrator | Wednesday 19 February 2025 08:59:16 +0000 (0:00:07.217) 0:07:06.311 **** 2025-02-19 09:00:43.689903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:00:43.689909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:00:43.689920 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.689926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:00:43.689932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:00:43.689939 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.689958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:00:43.689965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:00:43.689975 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.689981 | orchestrator | 2025-02-19 09:00:43.690005 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-02-19 09:00:43.690035 | orchestrator | Wednesday 19 February 2025 08:59:17 +0000 (0:00:01.012) 0:07:07.324 **** 2025-02-19 09:00:43.690046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-19 09:00:43.690053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-19 09:00:43.690060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-19 09:00:43.690067 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.690073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-19 09:00:43.690079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-19 09:00:43.690085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-19 09:00:43.690091 | orchestrator 
| skipping: [testbed-node-1] 2025-02-19 09:00:43.690100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-02-19 09:00:43.690107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-19 09:00:43.690129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-02-19 09:00:43.690137 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.690143 | orchestrator | 2025-02-19 09:00:43.690149 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-02-19 09:00:43.690155 | orchestrator | Wednesday 19 February 2025 08:59:18 +0000 (0:00:01.559) 0:07:08.884 **** 2025-02-19 09:00:43.690166 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.690172 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.690178 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.690184 | orchestrator | 2025-02-19 09:00:43.690190 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-02-19 09:00:43.690196 | orchestrator | Wednesday 19 February 2025 08:59:19 +0000 (0:00:00.586) 0:07:09.470 **** 2025-02-19 09:00:43.690202 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.690208 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.690214 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.690220 | orchestrator | 2025-02-19 09:00:43.690226 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-02-19 09:00:43.690232 | orchestrator | Wednesday 19 February 2025 08:59:20 +0000 (0:00:01.459) 0:07:10.930 **** 2025-02-19 09:00:43.690238 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.690243 | orchestrator | 2025-02-19 09:00:43.690249 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-02-19 09:00:43.690255 | orchestrator | Wednesday 19 February 2025 08:59:22 +0000 (0:00:02.027) 0:07:12.958 **** 2025-02-19 09:00:43.690261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-19 09:00:43.690268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:00:43.690274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-19 09:00:43.690320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:00:43.690326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-19 09:00:43.690332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:00:43.690351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-19 09:00:43.690400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:00:43.690406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-19 09:00:43.690456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:00:43.690463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-19 09:00:43.690501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:00:43.690507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690539 | orchestrator | 2025-02-19 09:00:43.690545 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-02-19 09:00:43.690551 | orchestrator | Wednesday 19 February 2025 08:59:28 +0000 (0:00:05.741) 0:07:18.699 **** 2025-02-19 09:00:43.690557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:00:43.690563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:00:43.690570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:00:43.690579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:00:43.690595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:00:43.690639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:00:43.690648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:00:43.690660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:00:43.690676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:00:43.690683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:00:43.690709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690737 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.690747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:00:43.690764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:00:43.690776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690803 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.690809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:00:43.690824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:00:43.690830 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.690836 | orchestrator | 2025-02-19 09:00:43.690841 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-02-19 09:00:43.690847 | orchestrator | Wednesday 19 February 2025 08:59:29 +0000 (0:00:01.387) 0:07:20.086 **** 2025-02-19 09:00:43.690853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}})  2025-02-19 09:00:43.690860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-02-19 09:00:43.690866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-19 09:00:43.690879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-19 09:00:43.690885 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.690892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-02-19 09:00:43.690898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-02-19 09:00:43.690904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-19 09:00:43.690910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-19 09:00:43.690916 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.690922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-02-19 09:00:43.690928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-02-19 09:00:43.690934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-19 09:00:43.690943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-02-19 09:00:43.690949 | orchestrator | skipping: 
[testbed-node-2] 2025-02-19 09:00:43.690955 | orchestrator | 2025-02-19 09:00:43.690961 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-02-19 09:00:43.690967 | orchestrator | Wednesday 19 February 2025 08:59:31 +0000 (0:00:01.929) 0:07:22.016 **** 2025-02-19 09:00:43.690972 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.690978 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.690984 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691009 | orchestrator | 2025-02-19 09:00:43.691019 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-02-19 09:00:43.691030 | orchestrator | Wednesday 19 February 2025 08:59:32 +0000 (0:00:00.710) 0:07:22.726 **** 2025-02-19 09:00:43.691040 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691050 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691056 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691062 | orchestrator | 2025-02-19 09:00:43.691068 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-02-19 09:00:43.691073 | orchestrator | Wednesday 19 February 2025 08:59:34 +0000 (0:00:02.023) 0:07:24.750 **** 2025-02-19 09:00:43.691079 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.691090 | orchestrator | 2025-02-19 09:00:43.691098 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-02-19 09:00:43.691104 | orchestrator | Wednesday 19 February 2025 08:59:36 +0000 (0:00:02.229) 0:07:26.979 **** 2025-02-19 09:00:43.691110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 09:00:43.691124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 09:00:43.691131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-19 09:00:43.691137 | orchestrator | 2025-02-19 09:00:43.691143 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-02-19 09:00:43.691151 | orchestrator | Wednesday 19 February 2025 08:59:39 +0000 (0:00:02.721) 0:07:29.701 **** 2025-02-19 09:00:43.691158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-02-19 09:00:43.691168 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  
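The rabbitmq items in the loop above are easier to follow as structured data. Below is a minimal Python sketch reconstructed purely from the logged output: the dict literal copies values printed above, and the helper haproxy_entries() is a hypothetical illustration of the selection this loop appears to make (services that are enabled and carry an 'haproxy' map get config rendered, everything else is reported as "skipping"); it is not code taken from kolla-ansible itself.

    # Values copied verbatim from the logged 'rabbitmq' item; trimmed to the keys
    # relevant to the haproxy-config tasks shown in this play.
    rabbitmq_service = {
        "container_name": "rabbitmq",
        "enabled": True,
        "image": "registry.osism.tech/kolla/rabbitmq:2024.1",
        "haproxy": {
            "rabbitmq_management": {
                "enabled": "yes",
                "mode": "http",
                "port": "15672",
                "host_group": "rabbitmq",
            }
        },
    }

    # Hypothetical helper (not part of kolla-ansible): yield one entry per haproxy
    # listener of every enabled service, mirroring what the "Copying over ...
    # haproxy config" loops above report as changed vs. skipping.
    def haproxy_entries(services):
        for name, svc in services.items():
            if not svc.get("enabled") or not svc.get("haproxy"):
                continue
            for listener, cfg in svc["haproxy"].items():
                yield name, listener, cfg.get("port"), cfg.get("mode")

    for entry in haproxy_entries({"rabbitmq": rabbitmq_service}):
        print(entry)  # ('rabbitmq', 'rabbitmq_management', '15672', 'http')
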
2025-02-19 09:00:43.691184 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-02-19 09:00:43.691197 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691203 | orchestrator | 2025-02-19 09:00:43.691209 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-02-19 09:00:43.691215 | orchestrator | Wednesday 19 February 2025 08:59:40 +0000 (0:00:00.807) 0:07:30.509 **** 2025-02-19 09:00:43.691221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-19 09:00:43.691227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-19 09:00:43.691233 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691239 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-02-19 09:00:43.691251 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691257 | orchestrator | 2025-02-19 09:00:43.691263 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-02-19 09:00:43.691269 | orchestrator | Wednesday 19 February 2025 08:59:41 +0000 (0:00:01.041) 0:07:31.550 **** 2025-02-19 09:00:43.691275 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691280 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691292 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691298 | orchestrator | 2025-02-19 09:00:43.691304 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-02-19 09:00:43.691310 | orchestrator | Wednesday 19 February 2025 08:59:41 +0000 (0:00:00.686) 0:07:32.236 **** 2025-02-19 09:00:43.691316 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691324 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691330 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691336 | orchestrator | 2025-02-19 09:00:43.691342 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-02-19 09:00:43.691348 | orchestrator | Wednesday 19 February 2025 08:59:43 +0000 (0:00:01.753) 0:07:33.990 **** 2025-02-19 
09:00:43.691354 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:00:43.691360 | orchestrator | 2025-02-19 09:00:43.691366 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-02-19 09:00:43.691372 | orchestrator | Wednesday 19 February 2025 08:59:46 +0000 (0:00:02.397) 0:07:36.388 **** 2025-02-19 09:00:43.691378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.691385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.691391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.691404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.691414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.691420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-02-19 09:00:43.691426 | orchestrator | 2025-02-19 09:00:43.691432 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-02-19 09:00:43.691438 | orchestrator | Wednesday 19 February 2025 08:59:55 +0000 (0:00:09.200) 0:07:45.588 **** 2025-02-19 09:00:43.691448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.691457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.691467 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.691479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.691485 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.691503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-02-19 09:00:43.691512 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691518 | orchestrator | 2025-02-19 09:00:43.691524 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-02-19 09:00:43.691530 | orchestrator | Wednesday 19 February 2025 08:59:56 +0000 (0:00:01.234) 0:07:46.822 **** 2025-02-19 09:00:43.691538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691563 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691593 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-02-19 09:00:43.691626 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691632 | orchestrator | 2025-02-19 09:00:43.691638 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-02-19 09:00:43.691644 | orchestrator | Wednesday 19 February 2025 08:59:58 +0000 (0:00:02.112) 0:07:48.935 **** 2025-02-19 09:00:43.691650 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691656 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691662 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691668 | orchestrator | 2025-02-19 09:00:43.691674 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-02-19 09:00:43.691680 | orchestrator | Wednesday 19 February 2025 08:59:59 +0000 (0:00:00.677) 0:07:49.612 **** 2025-02-19 09:00:43.691686 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691692 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691698 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691704 | orchestrator | 2025-02-19 09:00:43.691710 | orchestrator | TASK [include_role : swift] **************************************************** 2025-02-19 09:00:43.691716 | orchestrator | Wednesday 19 February 2025 09:00:00 +0000 (0:00:01.577) 0:07:51.190 **** 2025-02-19 09:00:43.691722 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691727 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691733 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691739 | orchestrator | 2025-02-19 09:00:43.691745 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-02-19 09:00:43.691751 | orchestrator | Wednesday 19 
February 2025 09:00:01 +0000 (0:00:00.601) 0:07:51.791 **** 2025-02-19 09:00:43.691757 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691763 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691769 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691775 | orchestrator | 2025-02-19 09:00:43.691781 | orchestrator | TASK [include_role : trove] **************************************************** 2025-02-19 09:00:43.691787 | orchestrator | Wednesday 19 February 2025 09:00:02 +0000 (0:00:00.522) 0:07:52.314 **** 2025-02-19 09:00:43.691793 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691801 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691807 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691817 | orchestrator | 2025-02-19 09:00:43.691824 | orchestrator | TASK [include_role : venus] **************************************************** 2025-02-19 09:00:43.691830 | orchestrator | Wednesday 19 February 2025 09:00:02 +0000 (0:00:00.703) 0:07:53.018 **** 2025-02-19 09:00:43.691836 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691842 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691848 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691854 | orchestrator | 2025-02-19 09:00:43.691860 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-02-19 09:00:43.691869 | orchestrator | Wednesday 19 February 2025 09:00:03 +0000 (0:00:00.329) 0:07:53.348 **** 2025-02-19 09:00:43.691875 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691881 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691887 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691893 | orchestrator | 2025-02-19 09:00:43.691899 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-02-19 09:00:43.691905 | orchestrator | Wednesday 19 February 2025 09:00:03 +0000 (0:00:00.502) 0:07:53.850 **** 2025-02-19 09:00:43.691910 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.691916 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.691922 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.691928 | orchestrator | 2025-02-19 09:00:43.691934 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-02-19 09:00:43.691940 | orchestrator | Wednesday 19 February 2025 09:00:04 +0000 (0:00:00.867) 0:07:54.718 **** 2025-02-19 09:00:43.691946 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.691955 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.691961 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.691966 | orchestrator | 2025-02-19 09:00:43.691972 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-02-19 09:00:43.691978 | orchestrator | Wednesday 19 February 2025 09:00:05 +0000 (0:00:01.079) 0:07:55.797 **** 2025-02-19 09:00:43.691984 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.692008 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.692015 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.692021 | orchestrator | 2025-02-19 09:00:43.692026 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-02-19 09:00:43.692032 | orchestrator | Wednesday 19 February 2025 09:00:06 +0000 (0:00:00.692) 0:07:56.490 **** 2025-02-19 09:00:43.692038 | 
orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.692044 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.692050 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.692056 | orchestrator | 2025-02-19 09:00:43.692061 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-02-19 09:00:43.692067 | orchestrator | Wednesday 19 February 2025 09:00:07 +0000 (0:00:01.422) 0:07:57.912 **** 2025-02-19 09:00:43.692073 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.692079 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.692085 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.692091 | orchestrator | 2025-02-19 09:00:43.692096 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-02-19 09:00:43.692102 | orchestrator | Wednesday 19 February 2025 09:00:08 +0000 (0:00:01.120) 0:07:59.032 **** 2025-02-19 09:00:43.692108 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.692114 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.692120 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.692126 | orchestrator | 2025-02-19 09:00:43.692132 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-02-19 09:00:43.692138 | orchestrator | Wednesday 19 February 2025 09:00:10 +0000 (0:00:01.489) 0:08:00.522 **** 2025-02-19 09:00:43.692144 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:00:43.692150 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:00:43.692156 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:00:43.692162 | orchestrator | 2025-02-19 09:00:43.692168 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-02-19 09:00:43.692173 | orchestrator | Wednesday 19 February 2025 09:00:21 +0000 (0:00:11.045) 0:08:11.567 **** 2025-02-19 09:00:43.692179 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.692185 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.692191 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.692197 | orchestrator | 2025-02-19 09:00:43.692203 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-02-19 09:00:43.692209 | orchestrator | Wednesday 19 February 2025 09:00:22 +0000 (0:00:00.710) 0:08:12.277 **** 2025-02-19 09:00:43.692215 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.692221 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.692227 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.692233 | orchestrator | 2025-02-19 09:00:43.692239 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-02-19 09:00:43.692245 | orchestrator | Wednesday 19 February 2025 09:00:23 +0000 (0:00:01.154) 0:08:13.432 **** 2025-02-19 09:00:43.692250 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:00:43.692256 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:00:43.692262 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:00:43.692268 | orchestrator | 2025-02-19 09:00:43.692274 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-02-19 09:00:43.692280 | orchestrator | Wednesday 19 February 2025 09:00:29 +0000 (0:00:06.019) 0:08:19.452 **** 2025-02-19 09:00:43.692286 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.692291 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.692297 
| orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.692308 | orchestrator | 2025-02-19 09:00:43.692314 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-02-19 09:00:43.692320 | orchestrator | Wednesday 19 February 2025 09:00:29 +0000 (0:00:00.720) 0:08:20.172 **** 2025-02-19 09:00:43.692326 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.692332 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.692338 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.692344 | orchestrator | 2025-02-19 09:00:43.692350 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-02-19 09:00:43.692356 | orchestrator | Wednesday 19 February 2025 09:00:30 +0000 (0:00:00.389) 0:08:20.562 **** 2025-02-19 09:00:43.692362 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.692367 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.692373 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.692379 | orchestrator | 2025-02-19 09:00:43.692392 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-02-19 09:00:43.692398 | orchestrator | Wednesday 19 February 2025 09:00:30 +0000 (0:00:00.679) 0:08:21.241 **** 2025-02-19 09:00:43.692404 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.692410 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.692416 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.692429 | orchestrator | 2025-02-19 09:00:43.692435 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-02-19 09:00:43.692441 | orchestrator | Wednesday 19 February 2025 09:00:31 +0000 (0:00:00.686) 0:08:21.927 **** 2025-02-19 09:00:43.692447 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.692453 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.692462 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.692468 | orchestrator | 2025-02-19 09:00:43.692474 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-02-19 09:00:43.692483 | orchestrator | Wednesday 19 February 2025 09:00:32 +0000 (0:00:00.400) 0:08:22.328 **** 2025-02-19 09:00:43.692489 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.692495 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.692501 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.692506 | orchestrator | 2025-02-19 09:00:43.692512 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-02-19 09:00:43.692518 | orchestrator | Wednesday 19 February 2025 09:00:32 +0000 (0:00:00.705) 0:08:23.034 **** 2025-02-19 09:00:43.692524 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:00:43.692530 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:00:43.692536 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:00:43.692541 | orchestrator | 2025-02-19 09:00:43.692548 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-02-19 09:00:43.692553 | orchestrator | Wednesday 19 February 2025 09:00:38 +0000 (0:00:05.634) 0:08:28.669 **** 2025-02-19 09:00:43.692559 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:00:43.692565 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:00:43.692571 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:00:43.692577 | 
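The loadbalancer handler sequence above restarts the backup haproxy/keepalived containers first and then blocks on "Wait for haproxy to listen on VIP" until haproxy actually answers on the virtual IP. A minimal Python sketch of such a "wait until the port accepts connections" check is shown here; the address and port are documentation placeholders, not values taken from this log, and the sketch only illustrates the idea rather than the kolla-ansible implementation.

import socket
import time

def wait_for_listen(host: str, port: int, timeout: float = 300.0, interval: float = 1.0) -> bool:
    # Retry a TCP connect to host:port until it succeeds or the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(interval)
    return False

# Placeholder values for illustration only; the real VIP and port are not shown in this log.
if wait_for_listen("192.0.2.10", 443):
    print("haproxy is listening on the VIP")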
orchestrator | 2025-02-19 09:00:43.692582 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:00:43.692588 | orchestrator | testbed-node-0 : ok=85  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-19 09:00:43.692594 | orchestrator | testbed-node-1 : ok=84  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-19 09:00:43.692600 | orchestrator | testbed-node-2 : ok=84  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-02-19 09:00:43.692606 | orchestrator | 2025-02-19 09:00:43.692612 | orchestrator | 2025-02-19 09:00:43.692618 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:00:43.692624 | orchestrator | Wednesday 19 February 2025 09:00:39 +0000 (0:00:01.117) 0:08:29.787 **** 2025-02-19 09:00:43.692633 | orchestrator | =============================================================================== 2025-02-19 09:00:43.692639 | orchestrator | haproxy-config : Copying over ironic haproxy config -------------------- 16.72s 2025-02-19 09:00:43.692645 | orchestrator | haproxy-config : Copying over heat haproxy config ---------------------- 15.17s 2025-02-19 09:00:43.692651 | orchestrator | haproxy-config : Configuring firewall for glance ----------------------- 12.31s 2025-02-19 09:00:43.692657 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 11.05s 2025-02-19 09:00:43.692663 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 9.20s 2025-02-19 09:00:43.692669 | orchestrator | loadbalancer : Removing checks for services which are disabled ---------- 8.24s 2025-02-19 09:00:43.692674 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 7.87s 2025-02-19 09:00:43.692681 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 7.39s 2025-02-19 09:00:43.692686 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 7.36s 2025-02-19 09:00:43.692692 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.22s 2025-02-19 09:00:43.692698 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.13s 2025-02-19 09:00:43.692704 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 7.06s 2025-02-19 09:00:43.692710 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.83s 2025-02-19 09:00:43.692716 | orchestrator | loadbalancer : Ensuring keepalived checks subdir exists ----------------- 6.57s 2025-02-19 09:00:43.692722 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.19s 2025-02-19 09:00:43.692727 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 6.02s 2025-02-19 09:00:43.692733 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.01s 2025-02-19 09:00:43.692739 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.01s 2025-02-19 09:00:43.692745 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.84s 2025-02-19 09:00:43.692751 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.81s 2025-02-19 09:00:43.692757 | orchestrator | 2025-02-19 09:00:43 | INFO  
| Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:43.692763 | orchestrator | 2025-02-19 09:00:43 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:43.692771 | orchestrator | 2025-02-19 09:00:43 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:00:46.735118 | orchestrator | 2025-02-19 09:00:43 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:00:46.735256 | orchestrator | 2025-02-19 09:00:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:46.735294 | orchestrator | 2025-02-19 09:00:46 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:46.736259 | orchestrator | 2025-02-19 09:00:46 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:46.736297 | orchestrator | 2025-02-19 09:00:46 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:00:46.738613 | orchestrator | 2025-02-19 09:00:46 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:00:49.784258 | orchestrator | 2025-02-19 09:00:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:49.784408 | orchestrator | 2025-02-19 09:00:49 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:49.785739 | orchestrator | 2025-02-19 09:00:49 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:49.788458 | orchestrator | 2025-02-19 09:00:49 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:00:49.789122 | orchestrator | 2025-02-19 09:00:49 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:00:52.831054 | orchestrator | 2025-02-19 09:00:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:52.831194 | orchestrator | 2025-02-19 09:00:52 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:52.831431 | orchestrator | 2025-02-19 09:00:52 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:52.833754 | orchestrator | 2025-02-19 09:00:52 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:00:52.834736 | orchestrator | 2025-02-19 09:00:52 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:00:55.883866 | orchestrator | 2025-02-19 09:00:52 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:55.883987 | orchestrator | 2025-02-19 09:00:55 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:55.884308 | orchestrator | 2025-02-19 09:00:55 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:55.885434 | orchestrator | 2025-02-19 09:00:55 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:00:55.886219 | orchestrator | 2025-02-19 09:00:55 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:00:58.944675 | orchestrator | 2025-02-19 09:00:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:00:58.944783 | orchestrator | 2025-02-19 09:00:58 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:00:58.945609 | orchestrator | 2025-02-19 09:00:58 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:00:58.945632 | orchestrator | 2025-02-19 09:00:58 | INFO  | 
Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:00:58.946288 | orchestrator | 2025-02-19 09:00:58 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:00:58.946513 | orchestrator | 2025-02-19 09:00:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:02.011778 | orchestrator | 2025-02-19 09:01:02 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:02.014355 | orchestrator | 2025-02-19 09:01:02 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:02.014689 | orchestrator | 2025-02-19 09:01:02 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:02.015760 | orchestrator | 2025-02-19 09:01:02 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:05.072766 | orchestrator | 2025-02-19 09:01:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:05.072884 | orchestrator | 2025-02-19 09:01:05 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:05.073247 | orchestrator | 2025-02-19 09:01:05 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:05.076124 | orchestrator | 2025-02-19 09:01:05 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:05.077689 | orchestrator | 2025-02-19 09:01:05 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:08.131603 | orchestrator | 2025-02-19 09:01:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:08.131723 | orchestrator | 2025-02-19 09:01:08 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:08.134584 | orchestrator | 2025-02-19 09:01:08 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:08.139278 | orchestrator | 2025-02-19 09:01:08 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:11.198761 | orchestrator | 2025-02-19 09:01:08 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:11.198890 | orchestrator | 2025-02-19 09:01:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:11.198971 | orchestrator | 2025-02-19 09:01:11 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:11.199414 | orchestrator | 2025-02-19 09:01:11 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:11.201656 | orchestrator | 2025-02-19 09:01:11 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:11.202354 | orchestrator | 2025-02-19 09:01:11 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:14.265513 | orchestrator | 2025-02-19 09:01:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:14.265627 | orchestrator | 2025-02-19 09:01:14 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:14.268516 | orchestrator | 2025-02-19 09:01:14 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:14.272355 | orchestrator | 2025-02-19 09:01:14 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:14.274473 | orchestrator | 2025-02-19 09:01:14 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:17.328264 | orchestrator | 2025-02-19 09:01:14 | INFO  | 
Wait 1 second(s) until the next check 2025-02-19 09:01:17.328435 | orchestrator | 2025-02-19 09:01:17 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:17.328737 | orchestrator | 2025-02-19 09:01:17 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:17.329978 | orchestrator | 2025-02-19 09:01:17 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:17.331054 | orchestrator | 2025-02-19 09:01:17 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:20.366516 | orchestrator | 2025-02-19 09:01:17 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:20.366661 | orchestrator | 2025-02-19 09:01:20 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:20.366895 | orchestrator | 2025-02-19 09:01:20 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:20.367521 | orchestrator | 2025-02-19 09:01:20 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:20.368274 | orchestrator | 2025-02-19 09:01:20 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:23.424492 | orchestrator | 2025-02-19 09:01:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:23.424639 | orchestrator | 2025-02-19 09:01:23 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:23.426735 | orchestrator | 2025-02-19 09:01:23 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:23.435768 | orchestrator | 2025-02-19 09:01:23 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:26.494974 | orchestrator | 2025-02-19 09:01:23 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:26.495231 | orchestrator | 2025-02-19 09:01:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:26.495271 | orchestrator | 2025-02-19 09:01:26 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:26.497657 | orchestrator | 2025-02-19 09:01:26 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:26.499790 | orchestrator | 2025-02-19 09:01:26 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:26.503463 | orchestrator | 2025-02-19 09:01:26 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:26.504580 | orchestrator | 2025-02-19 09:01:26 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:29.571574 | orchestrator | 2025-02-19 09:01:29 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:29.573865 | orchestrator | 2025-02-19 09:01:29 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:29.574917 | orchestrator | 2025-02-19 09:01:29 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:29.574951 | orchestrator | 2025-02-19 09:01:29 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:32.620667 | orchestrator | 2025-02-19 09:01:29 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:32.620803 | orchestrator | 2025-02-19 09:01:32 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:32.621975 | orchestrator | 2025-02-19 09:01:32 | INFO  | Task 
80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:32.624426 | orchestrator | 2025-02-19 09:01:32 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:32.626326 | orchestrator | 2025-02-19 09:01:32 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:35.679165 | orchestrator | 2025-02-19 09:01:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:35.679311 | orchestrator | 2025-02-19 09:01:35 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:35.680073 | orchestrator | 2025-02-19 09:01:35 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:35.680109 | orchestrator | 2025-02-19 09:01:35 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:35.680129 | orchestrator | 2025-02-19 09:01:35 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:38.716284 | orchestrator | 2025-02-19 09:01:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:38.716395 | orchestrator | 2025-02-19 09:01:38 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:38.717434 | orchestrator | 2025-02-19 09:01:38 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:38.718811 | orchestrator | 2025-02-19 09:01:38 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:38.719781 | orchestrator | 2025-02-19 09:01:38 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:38.719997 | orchestrator | 2025-02-19 09:01:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:41.770825 | orchestrator | 2025-02-19 09:01:41 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:41.774374 | orchestrator | 2025-02-19 09:01:41 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:41.778902 | orchestrator | 2025-02-19 09:01:41 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:41.781809 | orchestrator | 2025-02-19 09:01:41 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:44.828909 | orchestrator | 2025-02-19 09:01:41 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:44.829074 | orchestrator | 2025-02-19 09:01:44 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:44.832663 | orchestrator | 2025-02-19 09:01:44 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:44.832714 | orchestrator | 2025-02-19 09:01:44 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:44.833598 | orchestrator | 2025-02-19 09:01:44 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:47.879436 | orchestrator | 2025-02-19 09:01:44 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:47.879565 | orchestrator | 2025-02-19 09:01:47 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:47.880876 | orchestrator | 2025-02-19 09:01:47 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:47.882596 | orchestrator | 2025-02-19 09:01:47 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:47.884778 | orchestrator | 2025-02-19 09:01:47 | INFO  | Task 
1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:50.947412 | orchestrator | 2025-02-19 09:01:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:50.947562 | orchestrator | 2025-02-19 09:01:50 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:50.948463 | orchestrator | 2025-02-19 09:01:50 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:50.950194 | orchestrator | 2025-02-19 09:01:50 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:50.952322 | orchestrator | 2025-02-19 09:01:50 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:54.007665 | orchestrator | 2025-02-19 09:01:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:54.007825 | orchestrator | 2025-02-19 09:01:54 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:54.009096 | orchestrator | 2025-02-19 09:01:54 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:54.009154 | orchestrator | 2025-02-19 09:01:54 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:54.011348 | orchestrator | 2025-02-19 09:01:54 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:01:57.056763 | orchestrator | 2025-02-19 09:01:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:01:57.056861 | orchestrator | 2025-02-19 09:01:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:01:57.057393 | orchestrator | 2025-02-19 09:01:57 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:01:57.058947 | orchestrator | 2025-02-19 09:01:57 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:01:57.061279 | orchestrator | 2025-02-19 09:01:57 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:00.119266 | orchestrator | 2025-02-19 09:01:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:00.119432 | orchestrator | 2025-02-19 09:02:00 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:00.124183 | orchestrator | 2025-02-19 09:02:00 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:00.124555 | orchestrator | 2025-02-19 09:02:00 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:00.127198 | orchestrator | 2025-02-19 09:02:00 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:03.171975 | orchestrator | 2025-02-19 09:02:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:03.172203 | orchestrator | 2025-02-19 09:02:03 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:03.174660 | orchestrator | 2025-02-19 09:02:03 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:03.179191 | orchestrator | 2025-02-19 09:02:03 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:03.186813 | orchestrator | 2025-02-19 09:02:03 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:06.241116 | orchestrator | 2025-02-19 09:02:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:06.241236 | orchestrator | 2025-02-19 09:02:06 | INFO  | Task 
9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:06.243783 | orchestrator | 2025-02-19 09:02:06 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:06.249139 | orchestrator | 2025-02-19 09:02:06 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:06.250925 | orchestrator | 2025-02-19 09:02:06 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:09.336257 | orchestrator | 2025-02-19 09:02:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:09.336418 | orchestrator | 2025-02-19 09:02:09 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:09.339805 | orchestrator | 2025-02-19 09:02:09 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:09.344375 | orchestrator | 2025-02-19 09:02:09 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:09.345856 | orchestrator | 2025-02-19 09:02:09 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:12.397666 | orchestrator | 2025-02-19 09:02:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:12.397837 | orchestrator | 2025-02-19 09:02:12 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:12.399476 | orchestrator | 2025-02-19 09:02:12 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:12.400873 | orchestrator | 2025-02-19 09:02:12 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:12.403728 | orchestrator | 2025-02-19 09:02:12 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:15.448970 | orchestrator | 2025-02-19 09:02:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:15.449164 | orchestrator | 2025-02-19 09:02:15 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:15.449692 | orchestrator | 2025-02-19 09:02:15 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:15.449742 | orchestrator | 2025-02-19 09:02:15 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:15.450479 | orchestrator | 2025-02-19 09:02:15 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:18.496001 | orchestrator | 2025-02-19 09:02:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:18.496260 | orchestrator | 2025-02-19 09:02:18 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:18.500633 | orchestrator | 2025-02-19 09:02:18 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:18.504142 | orchestrator | 2025-02-19 09:02:18 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:18.508607 | orchestrator | 2025-02-19 09:02:18 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:21.547127 | orchestrator | 2025-02-19 09:02:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:21.547309 | orchestrator | 2025-02-19 09:02:21 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:21.547968 | orchestrator | 2025-02-19 09:02:21 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:21.549722 | orchestrator | 2025-02-19 09:02:21 | INFO  | Task 
5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:21.550777 | orchestrator | 2025-02-19 09:02:21 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:24.598112 | orchestrator | 2025-02-19 09:02:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:24.598440 | orchestrator | 2025-02-19 09:02:24 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:24.598844 | orchestrator | 2025-02-19 09:02:24 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:24.598875 | orchestrator | 2025-02-19 09:02:24 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:24.598896 | orchestrator | 2025-02-19 09:02:24 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:27.631653 | orchestrator | 2025-02-19 09:02:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:27.631813 | orchestrator | 2025-02-19 09:02:27 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:27.633434 | orchestrator | 2025-02-19 09:02:27 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:27.633494 | orchestrator | 2025-02-19 09:02:27 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:27.633518 | orchestrator | 2025-02-19 09:02:27 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:30.677159 | orchestrator | 2025-02-19 09:02:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:30.677308 | orchestrator | 2025-02-19 09:02:30 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:30.678457 | orchestrator | 2025-02-19 09:02:30 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:30.679019 | orchestrator | 2025-02-19 09:02:30 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:30.679790 | orchestrator | 2025-02-19 09:02:30 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:30.680010 | orchestrator | 2025-02-19 09:02:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:33.722712 | orchestrator | 2025-02-19 09:02:33 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:33.723720 | orchestrator | 2025-02-19 09:02:33 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:33.727840 | orchestrator | 2025-02-19 09:02:33 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:33.729526 | orchestrator | 2025-02-19 09:02:33 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:36.775259 | orchestrator | 2025-02-19 09:02:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:36.775406 | orchestrator | 2025-02-19 09:02:36 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:36.776862 | orchestrator | 2025-02-19 09:02:36 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:36.777792 | orchestrator | 2025-02-19 09:02:36 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:36.779049 | orchestrator | 2025-02-19 09:02:36 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:36.779202 | orchestrator | 2025-02-19 09:02:36 | INFO  | Wait 1 
second(s) until the next check 2025-02-19 09:02:39.828867 | orchestrator | 2025-02-19 09:02:39 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:39.829301 | orchestrator | 2025-02-19 09:02:39 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:39.830154 | orchestrator | 2025-02-19 09:02:39 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:39.830792 | orchestrator | 2025-02-19 09:02:39 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:42.872997 | orchestrator | 2025-02-19 09:02:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:42.873177 | orchestrator | 2025-02-19 09:02:42 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:42.873697 | orchestrator | 2025-02-19 09:02:42 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:42.873726 | orchestrator | 2025-02-19 09:02:42 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:42.874655 | orchestrator | 2025-02-19 09:02:42 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:45.916588 | orchestrator | 2025-02-19 09:02:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:45.916711 | orchestrator | 2025-02-19 09:02:45 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:45.917556 | orchestrator | 2025-02-19 09:02:45 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:45.918344 | orchestrator | 2025-02-19 09:02:45 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:45.919356 | orchestrator | 2025-02-19 09:02:45 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:48.960418 | orchestrator | 2025-02-19 09:02:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:48.960570 | orchestrator | 2025-02-19 09:02:48 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:48.960740 | orchestrator | 2025-02-19 09:02:48 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:48.960773 | orchestrator | 2025-02-19 09:02:48 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:48.962710 | orchestrator | 2025-02-19 09:02:48 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:48.963281 | orchestrator | 2025-02-19 09:02:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:51.999948 | orchestrator | 2025-02-19 09:02:51 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:52.000966 | orchestrator | 2025-02-19 09:02:51 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:52.001773 | orchestrator | 2025-02-19 09:02:51 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:52.003409 | orchestrator | 2025-02-19 09:02:52 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:52.004459 | orchestrator | 2025-02-19 09:02:52 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:55.044052 | orchestrator | 2025-02-19 09:02:55 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:55.045322 | orchestrator | 2025-02-19 09:02:55 | INFO  | Task 
80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:55.046806 | orchestrator | 2025-02-19 09:02:55 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:55.048100 | orchestrator | 2025-02-19 09:02:55 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:02:58.111978 | orchestrator | 2025-02-19 09:02:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:02:58.112140 | orchestrator | 2025-02-19 09:02:58 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:02:58.113532 | orchestrator | 2025-02-19 09:02:58 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:02:58.116530 | orchestrator | 2025-02-19 09:02:58 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state STARTED 2025-02-19 09:02:58.118715 | orchestrator | 2025-02-19 09:02:58 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:01.170688 | orchestrator | 2025-02-19 09:02:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:01.170812 | orchestrator | 2025-02-19 09:03:01 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:01.173788 | orchestrator | 2025-02-19 09:03:01 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:01.174749 | orchestrator | 2025-02-19 09:03:01 | INFO  | Task 5a41324d-841c-437a-81a8-48fd8a6e5d15 is in state SUCCESS 2025-02-19 09:03:01.183047 | orchestrator | 2025-02-19 09:03:01.183157 | orchestrator | 2025-02-19 09:03:01.183172 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:03:01.183186 | orchestrator | 2025-02-19 09:03:01.183199 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:03:01.183228 | orchestrator | Wednesday 19 February 2025 09:00:44 +0000 (0:00:00.395) 0:00:00.395 **** 2025-02-19 09:03:01.183243 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:03:01.183258 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:03:01.183271 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:03:01.183284 | orchestrator | 2025-02-19 09:03:01.183296 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:03:01.183309 | orchestrator | Wednesday 19 February 2025 09:00:45 +0000 (0:00:00.501) 0:00:00.896 **** 2025-02-19 09:03:01.183323 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-02-19 09:03:01.183336 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-02-19 09:03:01.183348 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-02-19 09:03:01.183361 | orchestrator | 2025-02-19 09:03:01.183373 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-02-19 09:03:01.183386 | orchestrator | 2025-02-19 09:03:01.183398 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-19 09:03:01.183411 | orchestrator | Wednesday 19 February 2025 09:00:45 +0000 (0:00:00.391) 0:00:01.287 **** 2025-02-19 09:03:01.183423 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:03:01.183456 | orchestrator | 2025-02-19 09:03:01.183469 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 
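The long run of "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages above is the OSISM client polling its background deployment tasks once per second until they reach a terminal state such as SUCCESS. A minimal sketch of that polling pattern, assuming a hypothetical get_task_state() helper (the real client reads task states from its own backend), could look like this:

import time

def get_task_state(task_id: str) -> str:
    # Hypothetical stand-in: the real OSISM client queries its task backend here.
    return "SUCCESS"

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    # Poll all tasks until none of them is still in the STARTED state.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["9d749719-e4e4-4bc8-80e5-0795801cf979"])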
2025-02-19 09:03:01.183481 | orchestrator | Wednesday 19 February 2025 09:00:46 +0000 (0:00:00.956) 0:00:02.244 **** 2025-02-19 09:03:01.183494 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-19 09:03:01.183506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-19 09:03:01.183519 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-19 09:03:01.183531 | orchestrator | 2025-02-19 09:03:01.183543 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-02-19 09:03:01.183556 | orchestrator | Wednesday 19 February 2025 09:00:47 +0000 (0:00:01.026) 0:00:03.271 **** 2025-02-19 09:03:01.183572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.183589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.183642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.183661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.183684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.183700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.183724 | orchestrator | 2025-02-19 09:03:01.183739 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-19 09:03:01.183754 | orchestrator | Wednesday 19 February 2025 09:00:49 +0000 (0:00:02.096) 0:00:05.368 **** 2025-02-19 09:03:01.183767 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 
09:03:01.183779 | orchestrator | 2025-02-19 09:03:01.183792 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-02-19 09:03:01.183805 | orchestrator | Wednesday 19 February 2025 09:00:51 +0000 (0:00:01.778) 0:00:07.147 **** 2025-02-19 09:03:01.183826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.183845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.183859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.183872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.183900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.183914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.183933 | orchestrator | 2025-02-19 09:03:01.183945 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-02-19 09:03:01.183958 | orchestrator | Wednesday 19 February 2025 09:00:56 +0000 (0:00:04.522) 0:00:11.669 **** 2025-02-19 09:03:01.183971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:03:01.183992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:03:01.184006 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:03:01.184019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:03:01.184039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:03:01.184067 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:03:01.184106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:03:01.184121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:03:01.184134 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:03:01.184146 | orchestrator | 2025-02-19 09:03:01.184159 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-02-19 09:03:01.184172 | orchestrator | Wednesday 19 February 2025 09:00:57 +0000 (0:00:01.265) 0:00:12.935 **** 2025-02-19 09:03:01.184185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:03:01.184212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:03:01.184226 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:03:01.184239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:03:01.184262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:03:01.184275 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:03:01.184288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-19 09:03:01.184315 | 
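Each service definition above carries a healthcheck of the form 'healthcheck_curl http://<node-ip>:<port>' with interval 30, retries 3 and timeout 30. A hedged Python sketch of an equivalent probe; the example URL is taken from the log, the function itself is illustrative rather than the kolla implementation:

```python
# Illustrative probe, roughly equivalent to the healthcheck_curl test in the
# service definitions above. Retries/timeout/interval mirror the logged values.
import time
import urllib.request

def probe(url: str, retries: int = 3, timeout: int = 30, interval: int = 30) -> bool:
    """Return True as soon as the endpoint answers with a 2xx/3xx response."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except OSError:
            if attempt < retries - 1:
                time.sleep(interval)
    return False

# Example: probe("http://192.168.16.10:9200")
```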
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-19 09:03:01.184328 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:03:01.184341 | orchestrator | 2025-02-19 09:03:01.184353 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-02-19 09:03:01.184366 | orchestrator | Wednesday 19 February 2025 09:00:59 +0000 (0:00:01.905) 0:00:14.841 **** 2025-02-19 09:03:01.184378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.184391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.184418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.184439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.184459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.184481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.184494 | orchestrator | 2025-02-19 09:03:01.184507 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-02-19 09:03:01.184519 | orchestrator | Wednesday 19 February 2025 09:01:02 +0000 (0:00:03.541) 0:00:18.382 **** 2025-02-19 09:03:01.184531 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:03:01.184544 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:03:01.184556 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:03:01.184568 | orchestrator | 2025-02-19 09:03:01.184585 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-02-19 09:03:01.184606 | orchestrator | Wednesday 19 February 2025 09:01:07 +0000 (0:00:04.561) 0:00:22.944 **** 2025-02-19 09:03:01.184625 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:03:01.184646 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:03:01.184664 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:03:01.184691 | orchestrator | 2025-02-19 09:03:01.184711 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-02-19 09:03:01.184729 | orchestrator | Wednesday 19 February 2025 09:01:09 +0000 (0:00:02.595) 0:00:25.539 **** 2025-02-19 09:03:01.184751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.184783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.184806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-19 09:03:01.184840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.184862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.184904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-19 09:03:01.184926 | orchestrator | 2025-02-19 09:03:01.184947 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-19 09:03:01.184969 | orchestrator | Wednesday 19 February 2025 09:01:14 +0000 (0:00:04.174) 0:00:29.713 **** 2025-02-19 09:03:01.184990 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:03:01.185010 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:03:01.185023 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:03:01.185036 | orchestrator | 2025-02-19 09:03:01.185049 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-19 09:03:01.185067 | orchestrator | Wednesday 19 February 2025 09:01:15 +0000 (0:00:00.949) 0:00:30.663 **** 2025-02-19 09:03:01.185110 | orchestrator | 2025-02-19 09:03:01.185124 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-19 09:03:01.185136 | orchestrator | Wednesday 19 February 2025 09:01:15 +0000 (0:00:00.691) 0:00:31.355 **** 2025-02-19 09:03:01.185149 | orchestrator | 2025-02-19 09:03:01.185162 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-19 09:03:01.185174 | orchestrator | Wednesday 19 February 2025 09:01:15 +0000 (0:00:00.259) 0:00:31.615 **** 2025-02-19 09:03:01.185187 | orchestrator | 2025-02-19 09:03:01.185199 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-02-19 09:03:01.185212 | orchestrator | Wednesday 19 February 2025 09:01:16 +0000 (0:00:00.221) 0:00:31.836 **** 2025-02-19 09:03:01.185224 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:03:01.185237 | orchestrator | 2025-02-19 09:03:01.185250 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-02-19 09:03:01.185262 | orchestrator | Wednesday 19 February 2025 09:01:16 +0000 (0:00:00.337) 0:00:32.174 **** 2025-02-19 09:03:01.185280 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:03:01.185300 | orchestrator | 2025-02-19 09:03:01.185321 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-02-19 09:03:01.185340 | orchestrator | Wednesday 19 February 2025 09:01:17 +0000 (0:00:00.705) 0:00:32.879 **** 2025-02-19 09:03:01.185360 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:03:01.185390 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:03:01.185411 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:03:01.185430 | orchestrator | 2025-02-19 09:03:01.185450 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-02-19 09:03:01.185471 | orchestrator | Wednesday 19 February 2025 09:02:01 +0000 (0:00:44.203) 0:01:17.083 **** 2025-02-19 09:03:01.185490 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:03:01.185510 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:03:01.185531 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:03:01.185552 | orchestrator | 2025-02-19 09:03:01.185572 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-19 09:03:01.185591 | orchestrator | Wednesday 19 February 2025 09:02:43 +0000 
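The "Disable shard allocation" and "Perform a flush" handlers are skipped here because this is a fresh deployment; on a running cluster they would call the OpenSearch cluster-settings and flush APIs before restarting containers. A hedged sketch of what disabling allocation could look like; the endpoint and payload are illustrative and not taken from the role:

```python
# Hedged sketch of a "Disable shard allocation" step: a PUT to the standard
# OpenSearch cluster-settings API. Endpoint and payload are illustrative.
import json
import urllib.request

def disable_shard_allocation(endpoint: str = "http://192.168.16.10:9200") -> int:
    body = {"transient": {"cluster.routing.allocation.enable": "primaries"}}
    req = urllib.request.Request(
        f"{endpoint}/_cluster/settings",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status
```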
(0:00:42.299) 0:01:59.382 **** 2025-02-19 09:03:01.185610 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:03:01.185631 | orchestrator | 2025-02-19 09:03:01.185653 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-02-19 09:03:01.185673 | orchestrator | Wednesday 19 February 2025 09:02:44 +0000 (0:00:00.746) 0:02:00.128 **** 2025-02-19 09:03:01.185692 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:03:01.185712 | orchestrator | 2025-02-19 09:03:01.185731 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-02-19 09:03:01.185751 | orchestrator | Wednesday 19 February 2025 09:02:48 +0000 (0:00:03.567) 0:02:03.696 **** 2025-02-19 09:03:01.185772 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:03:01.185792 | orchestrator | 2025-02-19 09:03:01.185812 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-02-19 09:03:01.185833 | orchestrator | Wednesday 19 February 2025 09:02:51 +0000 (0:00:03.359) 0:02:07.055 **** 2025-02-19 09:03:01.185851 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:03:01.185871 | orchestrator | 2025-02-19 09:03:01.185890 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-02-19 09:03:01.185908 | orchestrator | Wednesday 19 February 2025 09:02:54 +0000 (0:00:02.743) 0:02:09.799 **** 2025-02-19 09:03:01.185929 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:03:01.185949 | orchestrator | 2025-02-19 09:03:01.185969 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:03:01.185990 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 09:03:01.186012 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-19 09:03:01.186150 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-19 09:03:01.186320 | orchestrator | 2025-02-19 09:03:01.186350 | orchestrator | 2025-02-19 09:03:01.186371 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:03:01.186406 | orchestrator | Wednesday 19 February 2025 09:02:57 +0000 (0:00:03.570) 0:02:13.369 **** 2025-02-19 09:03:04.222722 | orchestrator | =============================================================================== 2025-02-19 09:03:04.222869 | orchestrator | opensearch : Restart opensearch container ------------------------------ 44.20s 2025-02-19 09:03:04.222901 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 42.30s 2025-02-19 09:03:04.222926 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.56s 2025-02-19 09:03:04.222949 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 4.52s 2025-02-19 09:03:04.222974 | orchestrator | opensearch : Check opensearch containers -------------------------------- 4.17s 2025-02-19 09:03:04.223022 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.57s 2025-02-19 09:03:04.223039 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.57s 2025-02-19 09:03:04.223131 | orchestrator | opensearch : Copying 
over config.json files for services ---------------- 3.54s 2025-02-19 09:03:04.223159 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.36s 2025-02-19 09:03:04.223175 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.74s 2025-02-19 09:03:04.223188 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.60s 2025-02-19 09:03:04.223202 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.10s 2025-02-19 09:03:04.223216 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.91s 2025-02-19 09:03:04.223230 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.78s 2025-02-19 09:03:04.223244 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.27s 2025-02-19 09:03:04.223260 | orchestrator | opensearch : Flush handlers --------------------------------------------- 1.17s 2025-02-19 09:03:04.223274 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.03s 2025-02-19 09:03:04.223290 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.96s 2025-02-19 09:03:04.223306 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.95s 2025-02-19 09:03:04.223323 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2025-02-19 09:03:04.223340 | orchestrator | 2025-02-19 09:03:01 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:04.223357 | orchestrator | 2025-02-19 09:03:01 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:04.223392 | orchestrator | 2025-02-19 09:03:04 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:04.223686 | orchestrator | 2025-02-19 09:03:04 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:04.223723 | orchestrator | 2025-02-19 09:03:04 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:07.261262 | orchestrator | 2025-02-19 09:03:04 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:07.261502 | orchestrator | 2025-02-19 09:03:07 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:07.263062 | orchestrator | 2025-02-19 09:03:07 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:07.263128 | orchestrator | 2025-02-19 09:03:07 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:10.305378 | orchestrator | 2025-02-19 09:03:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:10.305550 | orchestrator | 2025-02-19 09:03:10 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:10.305753 | orchestrator | 2025-02-19 09:03:10 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:10.307342 | orchestrator | 2025-02-19 09:03:10 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:13.349297 | orchestrator | 2025-02-19 09:03:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:13.349441 | orchestrator | 2025-02-19 09:03:13 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:13.349977 | orchestrator | 
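The three retention tasks above (check whether a log retention policy exists, create it, apply it to existing indices) talk to the OpenSearch Index State Management API. A hedged Python sketch of the create step; the policy id, age threshold and policy body are illustrative assumptions, not values from this job:

```python
# Hedged sketch of "Create new log retention policy" via the OpenSearch ISM API.
# Policy id, body and the 7-day threshold are illustrative, not from the role.
import json
import urllib.request

def create_retention_policy(endpoint: str = "http://192.168.16.10:9200",
                            policy_id: str = "retention") -> int:
    policy = {
        "policy": {
            "description": "Delete indices after 7 days (illustrative)",
            "default_state": "hot",
            "states": [
                {"name": "hot", "actions": [], "transitions": [
                    {"state_name": "delete", "conditions": {"min_index_age": "7d"}}]},
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
        }
    }
    req = urllib.request.Request(
        f"{endpoint}/_plugins/_ism/policies/{policy_id}",
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status
```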
2025-02-19 09:03:13 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:13.351054 | orchestrator | 2025-02-19 09:03:13 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:13.351199 | orchestrator | 2025-02-19 09:03:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:16.392587 | orchestrator | 2025-02-19 09:03:16 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:16.392873 | orchestrator | 2025-02-19 09:03:16 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:16.394749 | orchestrator | 2025-02-19 09:03:16 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:19.432459 | orchestrator | 2025-02-19 09:03:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:19.432603 | orchestrator | 2025-02-19 09:03:19 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:19.433788 | orchestrator | 2025-02-19 09:03:19 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:19.435480 | orchestrator | 2025-02-19 09:03:19 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:22.482322 | orchestrator | 2025-02-19 09:03:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:22.482573 | orchestrator | 2025-02-19 09:03:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:22.483514 | orchestrator | 2025-02-19 09:03:22 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:22.483555 | orchestrator | 2025-02-19 09:03:22 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:25.525825 | orchestrator | 2025-02-19 09:03:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:25.525928 | orchestrator | 2025-02-19 09:03:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:25.526284 | orchestrator | 2025-02-19 09:03:25 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:25.527853 | orchestrator | 2025-02-19 09:03:25 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:28.570773 | orchestrator | 2025-02-19 09:03:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:28.570930 | orchestrator | 2025-02-19 09:03:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:28.572896 | orchestrator | 2025-02-19 09:03:28 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:31.612567 | orchestrator | 2025-02-19 09:03:28 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:31.612767 | orchestrator | 2025-02-19 09:03:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:31.613027 | orchestrator | 2025-02-19 09:03:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:31.613071 | orchestrator | 2025-02-19 09:03:31 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:31.614707 | orchestrator | 2025-02-19 09:03:31 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:34.661233 | orchestrator | 2025-02-19 09:03:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:34.661393 | orchestrator | 2025-02-19 09:03:34 | INFO  | Task 
9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:34.662472 | orchestrator | 2025-02-19 09:03:34 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:34.663063 | orchestrator | 2025-02-19 09:03:34 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:34.663583 | orchestrator | 2025-02-19 09:03:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:37.707147 | orchestrator | 2025-02-19 09:03:37 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:37.707833 | orchestrator | 2025-02-19 09:03:37 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:37.708802 | orchestrator | 2025-02-19 09:03:37 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:40.772827 | orchestrator | 2025-02-19 09:03:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:40.772975 | orchestrator | 2025-02-19 09:03:40 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:40.774810 | orchestrator | 2025-02-19 09:03:40 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:40.777119 | orchestrator | 2025-02-19 09:03:40 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:43.836606 | orchestrator | 2025-02-19 09:03:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:43.836714 | orchestrator | 2025-02-19 09:03:43 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:43.839948 | orchestrator | 2025-02-19 09:03:43 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:43.844539 | orchestrator | 2025-02-19 09:03:43 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:46.887773 | orchestrator | 2025-02-19 09:03:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:46.887931 | orchestrator | 2025-02-19 09:03:46 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:46.892872 | orchestrator | 2025-02-19 09:03:46 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:46.895758 | orchestrator | 2025-02-19 09:03:46 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:49.936900 | orchestrator | 2025-02-19 09:03:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:49.937046 | orchestrator | 2025-02-19 09:03:49 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:49.937655 | orchestrator | 2025-02-19 09:03:49 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:49.939368 | orchestrator | 2025-02-19 09:03:49 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:52.977492 | orchestrator | 2025-02-19 09:03:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:52.977631 | orchestrator | 2025-02-19 09:03:52 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:52.979326 | orchestrator | 2025-02-19 09:03:52 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:52.980301 | orchestrator | 2025-02-19 09:03:52 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:56.027947 | orchestrator | 2025-02-19 09:03:52 | INFO  | Wait 1 second(s) until the next 
check 2025-02-19 09:03:56.028055 | orchestrator | 2025-02-19 09:03:56 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:56.030608 | orchestrator | 2025-02-19 09:03:56 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:56.033789 | orchestrator | 2025-02-19 09:03:56 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:03:59.086345 | orchestrator | 2025-02-19 09:03:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:03:59.086473 | orchestrator | 2025-02-19 09:03:59 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:03:59.087459 | orchestrator | 2025-02-19 09:03:59 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:03:59.089648 | orchestrator | 2025-02-19 09:03:59 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:02.126975 | orchestrator | 2025-02-19 09:03:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:02.127201 | orchestrator | 2025-02-19 09:04:02 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:02.128911 | orchestrator | 2025-02-19 09:04:02 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:02.130708 | orchestrator | 2025-02-19 09:04:02 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:05.174917 | orchestrator | 2025-02-19 09:04:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:05.175090 | orchestrator | 2025-02-19 09:04:05 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:05.176279 | orchestrator | 2025-02-19 09:04:05 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:05.178581 | orchestrator | 2025-02-19 09:04:05 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:05.179047 | orchestrator | 2025-02-19 09:04:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:08.222466 | orchestrator | 2025-02-19 09:04:08 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:11.270315 | orchestrator | 2025-02-19 09:04:08 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:11.270428 | orchestrator | 2025-02-19 09:04:08 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:11.270445 | orchestrator | 2025-02-19 09:04:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:11.270475 | orchestrator | 2025-02-19 09:04:11 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:11.271519 | orchestrator | 2025-02-19 09:04:11 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:11.272432 | orchestrator | 2025-02-19 09:04:11 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:14.312396 | orchestrator | 2025-02-19 09:04:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:14.312511 | orchestrator | 2025-02-19 09:04:14 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:14.313243 | orchestrator | 2025-02-19 09:04:14 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:14.314976 | orchestrator | 2025-02-19 09:04:14 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 
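The surrounding STARTED/Wait lines come from the deployment driver polling its task ids once per second until they reach a terminal state. A minimal sketch of such a wait loop; the helper names are hypothetical and not the actual osism implementation:

```python
# Minimal sketch of the polling loop producing the STARTED/Wait messages here.
# get_state is a hypothetical callable, e.g. wrapping a Celery AsyncResult lookup.
import time

def wait_for_tasks(get_state, task_ids, interval=1):
    """Poll until every task id reports a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```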
09:04:17.354184 | orchestrator | 2025-02-19 09:04:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:17.354282 | orchestrator | 2025-02-19 09:04:17 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:17.354919 | orchestrator | 2025-02-19 09:04:17 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:17.356001 | orchestrator | 2025-02-19 09:04:17 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:20.413297 | orchestrator | 2025-02-19 09:04:17 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:20.413440 | orchestrator | 2025-02-19 09:04:20 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:20.414403 | orchestrator | 2025-02-19 09:04:20 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:20.417415 | orchestrator | 2025-02-19 09:04:20 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:23.461450 | orchestrator | 2025-02-19 09:04:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:23.461616 | orchestrator | 2025-02-19 09:04:23 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:26.510532 | orchestrator | 2025-02-19 09:04:23 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:26.510658 | orchestrator | 2025-02-19 09:04:23 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:26.510690 | orchestrator | 2025-02-19 09:04:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:26.510737 | orchestrator | 2025-02-19 09:04:26 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:26.513616 | orchestrator | 2025-02-19 09:04:26 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:26.515503 | orchestrator | 2025-02-19 09:04:26 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:29.565065 | orchestrator | 2025-02-19 09:04:26 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:29.565263 | orchestrator | 2025-02-19 09:04:29 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:29.566083 | orchestrator | 2025-02-19 09:04:29 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:29.567330 | orchestrator | 2025-02-19 09:04:29 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:32.605294 | orchestrator | 2025-02-19 09:04:29 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:32.605436 | orchestrator | 2025-02-19 09:04:32 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:32.606625 | orchestrator | 2025-02-19 09:04:32 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:32.608797 | orchestrator | 2025-02-19 09:04:32 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:35.648785 | orchestrator | 2025-02-19 09:04:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:35.648913 | orchestrator | 2025-02-19 09:04:35 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:35.649626 | orchestrator | 2025-02-19 09:04:35 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:35.651639 | orchestrator | 2025-02-19 
09:04:35 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:38.698476 | orchestrator | 2025-02-19 09:04:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:38.698639 | orchestrator | 2025-02-19 09:04:38 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:38.699923 | orchestrator | 2025-02-19 09:04:38 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:38.702718 | orchestrator | 2025-02-19 09:04:38 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:41.753243 | orchestrator | 2025-02-19 09:04:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:41.753381 | orchestrator | 2025-02-19 09:04:41 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:41.754758 | orchestrator | 2025-02-19 09:04:41 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:41.754818 | orchestrator | 2025-02-19 09:04:41 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:44.798301 | orchestrator | 2025-02-19 09:04:41 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:44.798433 | orchestrator | 2025-02-19 09:04:44 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:44.800189 | orchestrator | 2025-02-19 09:04:44 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:44.801928 | orchestrator | 2025-02-19 09:04:44 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:47.846956 | orchestrator | 2025-02-19 09:04:44 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:47.847097 | orchestrator | 2025-02-19 09:04:47 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:47.848007 | orchestrator | 2025-02-19 09:04:47 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:47.848051 | orchestrator | 2025-02-19 09:04:47 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:50.902961 | orchestrator | 2025-02-19 09:04:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:50.903077 | orchestrator | 2025-02-19 09:04:50 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:50.904868 | orchestrator | 2025-02-19 09:04:50 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state STARTED 2025-02-19 09:04:50.908640 | orchestrator | 2025-02-19 09:04:50 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED 2025-02-19 09:04:53.957958 | orchestrator | 2025-02-19 09:04:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:04:54.137859 | orchestrator | 2025-02-19 09:04:53 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:04:54.137921 | orchestrator | 2025-02-19 09:04:53 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:04:54.137933 | orchestrator | 2025-02-19 09:04:53 | INFO  | Task 80218bc5-49d4-4231-814d-aac7ca1eb44c is in state SUCCESS 2025-02-19 09:04:54.137943 | orchestrator | 2025-02-19 09:04:54.137953 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-19 09:04:54.137962 | orchestrator | 2025-02-19 09:04:54.137972 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-02-19 
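The ceph-facts tasks that follow gather per-host facts such as whether the node is an atomic/ostree host and whether the podman binary is present. A hedged Python sketch of equivalent checks; the /run/ostree-booted marker is an assumption about the detection method, not confirmed by this log:

```python
# Hedged sketch of ceph-facts style host checks that follow.
# Using /run/ostree-booted as the atomic-host marker is an assumption.
import shutil
from pathlib import Path

def gather_ceph_host_facts() -> dict:
    return {
        "is_atomic": Path("/run/ostree-booted").exists(),
        "podman_present": shutil.which("podman") is not None,
    }

if __name__ == "__main__":
    print(gather_ceph_host_facts())
```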
09:04:54.137982 | orchestrator | 2025-02-19 09:04:54.137992 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-19 09:04:54.138002 | orchestrator | Wednesday 19 February 2025 08:49:35 +0000 (0:00:01.955) 0:00:01.955 **** 2025-02-19 09:04:54.138012 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.138070 | orchestrator | 2025-02-19 09:04:54.138080 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-19 09:04:54.138089 | orchestrator | Wednesday 19 February 2025 08:49:36 +0000 (0:00:01.360) 0:00:03.315 **** 2025-02-19 09:04:54.138100 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:04:54.138110 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-02-19 09:04:54.138119 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-02-19 09:04:54.138160 | orchestrator | 2025-02-19 09:04:54.138170 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-19 09:04:54.138179 | orchestrator | Wednesday 19 February 2025 08:49:37 +0000 (0:00:00.740) 0:00:04.055 **** 2025-02-19 09:04:54.138190 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.138200 | orchestrator | 2025-02-19 09:04:54.138209 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-19 09:04:54.138247 | orchestrator | Wednesday 19 February 2025 08:49:38 +0000 (0:00:01.184) 0:00:05.240 **** 2025-02-19 09:04:54.138257 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.138268 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.138277 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.138286 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.138296 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.138305 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.138315 | orchestrator | 2025-02-19 09:04:54.138324 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-19 09:04:54.138334 | orchestrator | Wednesday 19 February 2025 08:49:39 +0000 (0:00:01.471) 0:00:06.711 **** 2025-02-19 09:04:54.138343 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.138353 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.138363 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.138372 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.138388 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.138419 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.138436 | orchestrator | 2025-02-19 09:04:54.138452 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-19 09:04:54.138469 | orchestrator | Wednesday 19 February 2025 08:49:40 +0000 (0:00:00.767) 0:00:07.478 **** 2025-02-19 09:04:54.138486 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.138502 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.138512 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.138521 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.138530 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.138539 | orchestrator 
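
The ceph-facts role opens by probing each node: whether it is an Atomic/ostree host and whether a podman binary is available, which later determines the container engine. A rough Python equivalent of those two probes, sketched under the assumption that they boil down to a marker file and a PATH lookup (the role's actual tasks are not shown in this log):

    import os
    import shutil

    def probe_node():
        # Atomic/ostree hosts are commonly detected via this marker file
        # (an assumption here, not something the log itself states).
        is_atomic = os.path.exists("/run/ostree-booted")

        # "check if podman binary is present" amounts to a PATH lookup.
        has_podman = shutil.which("podman") is not None

        return {"is_atomic": is_atomic, "has_podman": has_podman}

    print(probe_node())
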
| ok: [testbed-node-4] 2025-02-19 09:04:54.138548 | orchestrator | 2025-02-19 09:04:54.138558 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-19 09:04:54.138567 | orchestrator | Wednesday 19 February 2025 08:49:41 +0000 (0:00:01.203) 0:00:08.682 **** 2025-02-19 09:04:54.138576 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.138585 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.138594 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.138603 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.138613 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.138622 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.138631 | orchestrator | 2025-02-19 09:04:54.138640 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-19 09:04:54.138654 | orchestrator | Wednesday 19 February 2025 08:49:43 +0000 (0:00:01.246) 0:00:09.929 **** 2025-02-19 09:04:54.138663 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.138673 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.138682 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.138691 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.138700 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.138709 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.138718 | orchestrator | 2025-02-19 09:04:54.138727 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-19 09:04:54.138736 | orchestrator | Wednesday 19 February 2025 08:49:44 +0000 (0:00:01.106) 0:00:11.036 **** 2025-02-19 09:04:54.138745 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.138755 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.138764 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.138773 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.138782 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.138791 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.138800 | orchestrator | 2025-02-19 09:04:54.138810 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-19 09:04:54.138819 | orchestrator | Wednesday 19 February 2025 08:49:46 +0000 (0:00:02.130) 0:00:13.166 **** 2025-02-19 09:04:54.138828 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.138839 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.138848 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.138857 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.138872 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.138882 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.138891 | orchestrator | 2025-02-19 09:04:54.138917 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-19 09:04:54.138927 | orchestrator | Wednesday 19 February 2025 08:49:47 +0000 (0:00:01.290) 0:00:14.457 **** 2025-02-19 09:04:54.138937 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.138946 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.138955 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.138964 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.138973 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.138982 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.138992 | orchestrator | 2025-02-19 09:04:54.139001 | orchestrator | TASK [ceph-facts : 
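
With the podman probe done, the role pins container_binary and derives the ceph_cmd wrapper used for every later ceph CLI call. The sketch below shows that decision in spirit only; the flags, volumes and image name are placeholders, since the exact command line the role builds does not appear in this log:

    def build_ceph_cmd(has_podman, containerized=True,
                       image="quay.io/ceph/ceph:latest"):
        # Prefer podman when it is installed, otherwise fall back to docker.
        container_binary = "podman" if has_podman else "docker"

        if not containerized:
            # Non-containerized deployments call the ceph binary directly.
            return ["ceph"]

        # Containerized deployments wrap ceph in a one-shot container run.
        return [container_binary, "run", "--rm", "--net=host",
                "-v", "/etc/ceph:/etc/ceph:z",
                "-v", "/var/lib/ceph:/var/lib/ceph:z",
                "--entrypoint=ceph", image]

    print(" ".join(build_ceph_cmd(has_podman=False)))
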
set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-19 09:04:54.139010 | orchestrator | Wednesday 19 February 2025 08:49:49 +0000 (0:00:01.501) 0:00:15.959 **** 2025-02-19 09:04:54.139020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:04:54.139029 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:04:54.139038 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:04:54.139048 | orchestrator | 2025-02-19 09:04:54.139057 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-19 09:04:54.139066 | orchestrator | Wednesday 19 February 2025 08:49:50 +0000 (0:00:01.129) 0:00:17.088 **** 2025-02-19 09:04:54.139075 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.139084 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.139093 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.139102 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.139111 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.139120 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.139147 | orchestrator | 2025-02-19 09:04:54.139157 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-19 09:04:54.139167 | orchestrator | Wednesday 19 February 2025 08:49:51 +0000 (0:00:01.606) 0:00:18.695 **** 2025-02-19 09:04:54.139176 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:04:54.139185 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:04:54.139195 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:04:54.139204 | orchestrator | 2025-02-19 09:04:54.139213 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-19 09:04:54.139222 | orchestrator | Wednesday 19 February 2025 08:49:54 +0000 (0:00:03.099) 0:00:21.794 **** 2025-02-19 09:04:54.139232 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.139241 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.139250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.139260 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.139269 | orchestrator | 2025-02-19 09:04:54.139278 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-19 09:04:54.139288 | orchestrator | Wednesday 19 February 2025 08:49:56 +0000 (0:00:01.180) 0:00:22.975 **** 2025-02-19 09:04:54.139298 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139309 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139318 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139333 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.139343 | orchestrator | 2025-02-19 09:04:54.139352 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-19 09:04:54.139361 | orchestrator | Wednesday 19 February 2025 08:49:57 +0000 (0:00:01.584) 0:00:24.560 **** 2025-02-19 09:04:54.139372 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139389 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139417 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139433 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.139449 | orchestrator | 2025-02-19 09:04:54.139464 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-19 09:04:54.139480 | orchestrator | Wednesday 19 February 2025 08:49:58 +0000 (0:00:00.393) 0:00:24.954 **** 2025-02-19 09:04:54.139492 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-19 08:49:52.531542', 'end': '2025-02-19 08:49:52.779123', 'delta': '0:00:00.247581', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139507 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-19 08:49:53.516192', 'end': '2025-02-19 08:49:53.775689', 'delta': '0:00:00.259497', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 
'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139517 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-19 08:49:54.416165', 'end': '2025-02-19 08:49:54.707990', 'delta': '0:00:00.291825', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-19 09:04:54.139533 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.139542 | orchestrator | 2025-02-19 09:04:54.139552 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-19 09:04:54.139561 | orchestrator | Wednesday 19 February 2025 08:49:58 +0000 (0:00:00.271) 0:00:25.225 **** 2025-02-19 09:04:54.139571 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.139581 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.139590 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.139599 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.139613 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.139622 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.139631 | orchestrator | 2025-02-19 09:04:54.139640 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-19 09:04:54.139649 | orchestrator | Wednesday 19 February 2025 08:50:00 +0000 (0:00:02.198) 0:00:27.424 **** 2025-02-19 09:04:54.139658 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.139667 | orchestrator | 2025-02-19 09:04:54.139677 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-19 09:04:54.139686 | orchestrator | Wednesday 19 February 2025 08:50:01 +0000 (0:00:00.882) 0:00:28.307 **** 2025-02-19 09:04:54.139695 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.139704 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.139713 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.139722 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.139731 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.139741 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.139750 | orchestrator | 2025-02-19 09:04:54.139762 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-19 09:04:54.139772 | orchestrator | Wednesday 19 February 2025 08:50:03 +0000 (0:00:01.930) 0:00:30.238 **** 2025-02-19 09:04:54.139781 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.139790 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.139799 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.139808 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.139818 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.139827 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.139836 | orchestrator | 2025-02-19 09:04:54.139845 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-19 09:04:54.139854 | orchestrator | Wednesday 19 February 
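
Before settling on a cluster fsid, the role tries to read one back from a running monitor ("get current fsid if cluster is already running"); the later set_fact/generate steps are all skipped in this run, consistent with the fsid already being fixed in the deployment's configuration (an inference, since the skip conditions themselves are not printed). A sketch of that probe, reusing the ceph-mon-<hostname> container name seen in the docker ps checks above; the exec path and flags are illustrative:

    import subprocess

    def get_running_fsid(mon_host="testbed-node-0", cluster="ceph"):
        """Ask a running mon container for the cluster fsid, if any."""
        cmd = ["docker", "exec", f"ceph-mon-{mon_host}",
               "ceph", "--cluster", cluster, "fsid"]
        try:
            result = subprocess.run(cmd, capture_output=True, text=True)
        except FileNotFoundError:
            return None  # no docker binary on this host
        if result.returncode != 0:
            return None  # no running cluster to read an fsid from
        return result.stdout.strip()

    print(get_running_fsid())
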
2025 08:50:05 +0000 (0:00:01.833) 0:00:32.071 **** 2025-02-19 09:04:54.139863 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.139878 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.139888 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.139898 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.139907 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.139916 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.139925 | orchestrator | 2025-02-19 09:04:54.139935 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-19 09:04:54.139944 | orchestrator | Wednesday 19 February 2025 08:50:06 +0000 (0:00:00.917) 0:00:32.989 **** 2025-02-19 09:04:54.139953 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.139962 | orchestrator | 2025-02-19 09:04:54.139972 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-19 09:04:54.139981 | orchestrator | Wednesday 19 February 2025 08:50:06 +0000 (0:00:00.725) 0:00:33.715 **** 2025-02-19 09:04:54.139990 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.140000 | orchestrator | 2025-02-19 09:04:54.140009 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-19 09:04:54.140018 | orchestrator | Wednesday 19 February 2025 08:50:07 +0000 (0:00:00.298) 0:00:34.013 **** 2025-02-19 09:04:54.140032 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.140041 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.140050 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.140060 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.140069 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.140078 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.140087 | orchestrator | 2025-02-19 09:04:54.140097 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-19 09:04:54.140106 | orchestrator | Wednesday 19 February 2025 08:50:08 +0000 (0:00:00.997) 0:00:35.011 **** 2025-02-19 09:04:54.140115 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.140147 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.140157 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.140166 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.140175 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.140184 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.140193 | orchestrator | 2025-02-19 09:04:54.140203 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-19 09:04:54.140212 | orchestrator | Wednesday 19 February 2025 08:50:09 +0000 (0:00:01.174) 0:00:36.186 **** 2025-02-19 09:04:54.140221 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.140230 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.140239 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.140248 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.140258 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.140267 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.140276 | orchestrator | 2025-02-19 09:04:54.140285 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-19 09:04:54.140294 | orchestrator | Wednesday 19 
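
The following tasks normalize the configured OSD device paths: "resolve device link(s)" dereferences /dev/disk/by-* style symlinks so later comparisons work on canonical device nodes. Every host skips them here, which fits the devices already being configured as plain block-device paths (again an inference; the skip conditions are not shown). A small normalization sketch, not the role's actual implementation:

    import os

    def resolve_devices(devices):
        """Map possibly-symlinked device paths to their canonical targets."""
        # /dev/disk/by-id/... entries are symlinks to the real device node;
        # realpath leaves plain /dev/sdX paths untouched.
        return [os.path.realpath(dev) for dev in devices]

    print(resolve_devices([
        "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_91d4d525-aaae-41a7-908a-2e5d882c10b9",
        "/dev/sdb",
    ]))
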
February 2025 08:50:10 +0000 (0:00:01.085) 0:00:37.272 **** 2025-02-19 09:04:54.140304 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.140313 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.140327 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.140342 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.140357 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.140373 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.140387 | orchestrator | 2025-02-19 09:04:54.140403 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-19 09:04:54.140417 | orchestrator | Wednesday 19 February 2025 08:50:11 +0000 (0:00:01.336) 0:00:38.608 **** 2025-02-19 09:04:54.140433 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.140450 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.140465 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.140480 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.140490 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.140502 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.140511 | orchestrator | 2025-02-19 09:04:54.140521 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-19 09:04:54.140530 | orchestrator | Wednesday 19 February 2025 08:50:12 +0000 (0:00:00.928) 0:00:39.537 **** 2025-02-19 09:04:54.140540 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.140549 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.140558 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.140572 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.140588 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.140602 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.140619 | orchestrator | 2025-02-19 09:04:54.140635 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-19 09:04:54.140653 | orchestrator | Wednesday 19 February 2025 08:50:14 +0000 (0:00:02.047) 0:00:41.584 **** 2025-02-19 09:04:54.140671 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.140690 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.140709 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.140741 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.140755 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.140765 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.140776 | orchestrator | 2025-02-19 09:04:54.140788 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-19 09:04:54.140815 | orchestrator | Wednesday 19 February 2025 08:50:16 +0000 (0:00:01.434) 0:00:43.019 **** 2025-02-19 09:04:54.140833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': 
[], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.140973 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9', 'scsi-SQEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9-part1', 'scsi-SQEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9-part14', 'scsi-SQEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9-part15', 'scsi-SQEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9-part16', 'scsi-SQEMU_QEMU_HARDDISK_0573e752-03bc-434b-92ad-736ac2b2aef9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.140997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d4d525-aaae-41a7-908a-2e5d882c10b9', 'scsi-SQEMU_QEMU_HARDDISK_91d4d525-aaae-41a7-908a-2e5d882c10b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06a3a42c-cb57-4c14-955c-f9e446b3a982', 'scsi-SQEMU_QEMU_HARDDISK_06a3a42c-cb57-4c14-955c-f9e446b3a982'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdb92e8-c898-48ca-adcb-2a30d1567e49', 'scsi-SQEMU_QEMU_HARDDISK_6cdb92e8-c898-48ca-adcb-2a30d1567e49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92', 'scsi-SQEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c471650-030e-4cf1-9d5e-edaf33164d92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6c08883-a737-4166-bae3-29df7aca0544', 'scsi-SQEMU_QEMU_HARDDISK_d6c08883-a737-4166-bae3-29df7aca0544'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141232 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.141244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_116ec19e-6576-4adf-ada1-59164a5d1c9f', 'scsi-SQEMU_QEMU_HARDDISK_116ec19e-6576-4adf-ada1-59164a5d1c9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae299bec-d23f-4bd0-a551-f66f5e1afde1', 'scsi-SQEMU_QEMU_HARDDISK_ae299bec-d23f-4bd0-a551-f66f5e1afde1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141366 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.141378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3', 'scsi-SQEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5676c63-b799-41fe-bf82-9c0ce222d8b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, 
Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21743850-c155-402b-9a95-271bd8472759', 'scsi-SQEMU_QEMU_HARDDISK_21743850-c155-402b-9a95-271bd8472759'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ffe4904--1899--5051--bec6--9b9e5f20cdb9-osd--block--3ffe4904--1899--5051--bec6--9b9e5f20cdb9', 'dm-uuid-LVM-gEJmrdxsi8tp7oi9IAUZfPfIca8NyMwBUandMV8FWSOsUmKZVrzNIrRAkdjGNneA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbf6aa6c--a724--5ce6--b507--3cef42d33bac-osd--block--bbf6aa6c--a724--5ce6--b507--3cef42d33bac', 'dm-uuid-LVM-DkF8lbRgUBw2OMYhZSYC3Mj76Auojemo6oME7Fny7y1DaH4u423Kt0pTeLvzlyID'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420ab18e-fdcb-4974-b92c-678938c23e9b', 'scsi-SQEMU_QEMU_HARDDISK_420ab18e-fdcb-4974-b92c-678938c23e9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c11fa33-d2ef-45ea-bc93-56551b069e33', 'scsi-SQEMU_QEMU_HARDDISK_5c11fa33-d2ef-45ea-bc93-56551b069e33'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part1', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part14', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part15', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part16', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3ffe4904--1899--5051--bec6--9b9e5f20cdb9-osd--block--3ffe4904--1899--5051--bec6--9b9e5f20cdb9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iVQ8Cq-b326-HJuN-XZS4-gVgm-tX63-mBVLKz', 'scsi-0QEMU_QEMU_HARDDISK_0f115ae7-332f-47b5-bfba-4efd1297123a', 'scsi-SQEMU_QEMU_HARDDISK_0f115ae7-332f-47b5-bfba-4efd1297123a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bbf6aa6c--a724--5ce6--b507--3cef42d33bac-osd--block--bbf6aa6c--a724--5ce6--b507--3cef42d33bac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JJdVgF-70Dz-4p2K-3PLu-wSk9-VY4w-6NjjOp', 'scsi-0QEMU_QEMU_HARDDISK_7ac42676-4a1f-422d-9e47-87a492d5a795', 'scsi-SQEMU_QEMU_HARDDISK_7ac42676-4a1f-422d-9e47-87a492d5a795'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b50482d4-467d-4151-94c3-bb810c8ecc19', 'scsi-SQEMU_QEMU_HARDDISK_b50482d4-467d-4151-94c3-bb810c8ecc19'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141687 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.141705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--118242ed--6ea1--54c4--bfaa--1565dde441bc-osd--block--118242ed--6ea1--54c4--bfaa--1565dde441bc', 'dm-uuid-LVM-CtaUjsMi1CYgydkFoChOl7u11z3fZlUqiGHzn1OUxfSbVbGcKoMeSlSr1s4lBlXC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--f77e8fc9--ceed--59c4--8328--4d335fb6ee54-osd--block--f77e8fc9--ceed--59c4--8328--4d335fb6ee54', 'dm-uuid-LVM-wWMRE2h8DeB3rvvyk4QGX6d1HblS3ppEYWzYMkr0qNhPWZkQRKJ6MBwDAXSwuCkw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141825 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.141836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--118242ed--6ea1--54c4--bfaa--1565dde441bc-osd--block--118242ed--6ea1--54c4--bfaa--1565dde441bc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UT9Bh1-4p8c-FCi0-Y3Pl-TrAL-cYTJ-PXESsV', 'scsi-0QEMU_QEMU_HARDDISK_923f2b44-0879-4277-a106-844be4b2565d', 'scsi-SQEMU_QEMU_HARDDISK_923f2b44-0879-4277-a106-844be4b2565d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f77e8fc9--ceed--59c4--8328--4d335fb6ee54-osd--block--f77e8fc9--ceed--59c4--8328--4d335fb6ee54'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qfbbg0-AXbj-2CBj-qHc6-VGx9-C6V6-WvY0EJ', 'scsi-0QEMU_QEMU_HARDDISK_0c5208c8-9aa1-4e87-9cdb-910770e18a0c', 'scsi-SQEMU_QEMU_HARDDISK_0c5208c8-9aa1-4e87-9cdb-910770e18a0c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69806146-708c-4195-b6c7-ec061db9d03d', 'scsi-SQEMU_QEMU_HARDDISK_69806146-708c-4195-b6c7-ec061db9d03d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45b4b457--0c8f--5565--8330--30b761ce6399-osd--block--45b4b457--0c8f--5565--8330--30b761ce6399', 'dm-uuid-LVM-FIDBVmZvPJCVlKBWyBmTxCi7nOYfujidc1htUG46sWxF2dd6J4BIKBoMeJlsWT11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.141936 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.141948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--185b0f4c--91cb--52bd--aac1--e01f69de71f3-osd--block--185b0f4c--91cb--52bd--aac1--e01f69de71f3', 'dm-uuid-LVM-b7fggpaB1M51uQSJQvACL6EVoRI0AC9FICBhcxsIn8K6v1Ar150fZTrHER4iS887'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.141982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.142004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.142051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.142066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.142083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.142095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:04:54.142107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part1', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part14', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part15', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part16', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.142371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--45b4b457--0c8f--5565--8330--30b761ce6399-osd--block--45b4b457--0c8f--5565--8330--30b761ce6399'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9djOlR-QZOy-Dl1F-RkRk-saSA-evps-3BKngU', 'scsi-0QEMU_QEMU_HARDDISK_eb5d754e-727a-4983-9d71-2a65afff7a52', 'scsi-SQEMU_QEMU_HARDDISK_eb5d754e-727a-4983-9d71-2a65afff7a52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.142405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--185b0f4c--91cb--52bd--aac1--e01f69de71f3-osd--block--185b0f4c--91cb--52bd--aac1--e01f69de71f3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1vS8HN-ZjI9-2C0i-kDNF-EEJM-07Vg-ezf8O5', 'scsi-0QEMU_QEMU_HARDDISK_00a01370-945d-463a-a32d-5e52b5234eb4', 'scsi-SQEMU_QEMU_HARDDISK_00a01370-945d-463a-a32d-5e52b5234eb4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.142417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_933f95c9-b090-4d95-b9b7-90a087e62286', 'scsi-SQEMU_QEMU_HARDDISK_933f95c9-b090-4d95-b9b7-90a087e62286'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.142430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:04:54.142442 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.142453 | orchestrator | 2025-02-19 09:04:54.142465 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-19 09:04:54.142477 | orchestrator | Wednesday 19 February 2025 08:50:19 +0000 (0:00:02.952) 0:00:45.971 **** 2025-02-19 09:04:54.142489 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.142500 | orchestrator | 2025-02-19 09:04:54.142511 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-19 09:04:54.142523 | orchestrator | Wednesday 19 February 2025 08:50:19 +0000 (0:00:00.501) 0:00:46.473 **** 2025-02-19 09:04:54.142534 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.142549 | orchestrator | 2025-02-19 09:04:54.142568 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] 
************************************** 2025-02-19 09:04:54.142587 | orchestrator | Wednesday 19 February 2025 08:50:19 +0000 (0:00:00.243) 0:00:46.716 **** 2025-02-19 09:04:54.142606 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.142625 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.142644 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.142664 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.142683 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.142704 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.142723 | orchestrator | 2025-02-19 09:04:54.142742 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-19 09:04:54.142760 | orchestrator | Wednesday 19 February 2025 08:50:21 +0000 (0:00:01.109) 0:00:47.826 **** 2025-02-19 09:04:54.142780 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.142799 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.142818 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.142837 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.142856 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.142875 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.142907 | orchestrator | 2025-02-19 09:04:54.142937 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-19 09:04:54.142957 | orchestrator | Wednesday 19 February 2025 08:50:22 +0000 (0:00:01.692) 0:00:49.522 **** 2025-02-19 09:04:54.142978 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.142998 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.143019 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.143041 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.143060 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.143081 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.143101 | orchestrator | 2025-02-19 09:04:54.143248 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-19 09:04:54.143300 | orchestrator | Wednesday 19 February 2025 08:50:23 +0000 (0:00:01.220) 0:00:50.742 **** 2025-02-19 09:04:54.143314 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.143328 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.143465 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.143494 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.143514 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.143706 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.143739 | orchestrator | 2025-02-19 09:04:54.143756 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-19 09:04:54.143774 | orchestrator | Wednesday 19 February 2025 08:50:25 +0000 (0:00:01.429) 0:00:52.171 **** 2025-02-19 09:04:54.143790 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.143807 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.143825 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.143842 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.143861 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.143889 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.143905 | orchestrator | 2025-02-19 09:04:54.143923 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-19 09:04:54.143940 
| orchestrator | Wednesday 19 February 2025 08:50:26 +0000 (0:00:00.985) 0:00:53.156 **** 2025-02-19 09:04:54.143955 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.143972 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.143989 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.144006 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.144023 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.144037 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.144047 | orchestrator | 2025-02-19 09:04:54.144057 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-19 09:04:54.144067 | orchestrator | Wednesday 19 February 2025 08:50:27 +0000 (0:00:01.608) 0:00:54.765 **** 2025-02-19 09:04:54.144078 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.144088 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.144097 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.144107 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.144117 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.144155 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.144166 | orchestrator | 2025-02-19 09:04:54.144176 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-19 09:04:54.144186 | orchestrator | Wednesday 19 February 2025 08:50:29 +0000 (0:00:01.318) 0:00:56.083 **** 2025-02-19 09:04:54.144197 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-19 09:04:54.144207 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.144217 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-19 09:04:54.144227 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-19 09:04:54.144238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.144248 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.144258 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-19 09:04:54.144279 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.144292 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.144309 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-19 09:04:54.144321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:04:54.144333 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-19 09:04:54.144344 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.144357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 09:04:54.144369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:04:54.144380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:04:54.144391 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.144402 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-19 09:04:54.144414 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 09:04:54.144425 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-19 09:04:54.144437 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.144448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 09:04:54.144460 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 09:04:54.144472 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.144484 | orchestrator | 2025-02-19 09:04:54.144495 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-19 09:04:54.144508 | orchestrator | Wednesday 19 February 2025 08:50:34 +0000 (0:00:05.585) 0:01:01.669 **** 2025-02-19 09:04:54.144519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.144531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.144542 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-19 09:04:54.144554 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.144565 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.144577 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-19 09:04:54.144589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-19 09:04:54.144600 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-19 09:04:54.144612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:04:54.144623 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-19 09:04:54.144635 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.144647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:04:54.144657 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 09:04:54.144667 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-19 09:04:54.144677 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.144687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:04:54.144697 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.144707 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-19 09:04:54.144718 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 09:04:54.144728 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-19 09:04:54.144825 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.144841 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 09:04:54.144851 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 09:04:54.144862 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.144872 | orchestrator | 2025-02-19 09:04:54.144882 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-19 09:04:54.144892 | orchestrator | Wednesday 19 February 2025 08:50:38 +0000 (0:00:03.865) 0:01:05.535 **** 2025-02-19 09:04:54.144903 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-02-19 09:04:54.144921 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:04:54.144939 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-02-19 09:04:54.144957 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-02-19 09:04:54.144975 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-02-19 09:04:54.144993 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-02-19 09:04:54.145011 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-02-19 09:04:54.145028 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-02-19 09:04:54.145047 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-19 09:04:54.145065 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-02-19 09:04:54.145077 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-02-19 09:04:54.145087 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-02-19 09:04:54.145103 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-02-19 09:04:54.145120 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-19 09:04:54.145157 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-02-19 09:04:54.145175 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-02-19 09:04:54.145193 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-02-19 09:04:54.145209 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-02-19 09:04:54.145220 | orchestrator | 2025-02-19 09:04:54.145230 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-19 09:04:54.145240 | orchestrator | Wednesday 19 February 2025 08:50:47 +0000 (0:00:08.691) 0:01:14.226 **** 2025-02-19 09:04:54.145250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.145261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.145271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.145281 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.145291 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-19 09:04:54.145303 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-19 09:04:54.145320 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-19 09:04:54.145337 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.145353 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-19 09:04:54.145370 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-19 09:04:54.145394 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-19 09:04:54.145411 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.145427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:04:54.145442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:04:54.145458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:04:54.145476 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 09:04:54.145494 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.145512 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-19 09:04:54.145530 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-19 09:04:54.145547 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.145564 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 09:04:54.145582 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 09:04:54.145599 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 09:04:54.145617 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.145634 | orchestrator | 2025-02-19 09:04:54.145652 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses 
to monitor_interface - ipv6] **** 2025-02-19 09:04:54.145669 | orchestrator | Wednesday 19 February 2025 08:50:49 +0000 (0:00:02.134) 0:01:16.360 **** 2025-02-19 09:04:54.145686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.145716 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.145734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.145751 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-19 09:04:54.145769 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-19 09:04:54.145786 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.145803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-19 09:04:54.145820 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-19 09:04:54.145836 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.145852 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-19 09:04:54.145869 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-19 09:04:54.145886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:04:54.145903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:04:54.145920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:04:54.145936 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.145953 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 09:04:54.145970 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.146189 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-19 09:04:54.146219 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-19 09:04:54.146237 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.146253 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 09:04:54.146270 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 09:04:54.146287 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 09:04:54.146304 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.146321 | orchestrator | 2025-02-19 09:04:54.146339 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-19 09:04:54.146356 | orchestrator | Wednesday 19 February 2025 08:50:50 +0000 (0:00:01.350) 0:01:17.711 **** 2025-02-19 09:04:54.146372 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-02-19 09:04:54.146390 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-19 09:04:54.146407 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-19 09:04:54.146434 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-19 09:04:54.146450 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-02-19 09:04:54.146467 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-19 09:04:54.146483 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-19 
09:04:54.146500 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-19 09:04:54.146516 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-02-19 09:04:54.146532 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-19 09:04:54.146548 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-19 09:04:54.146564 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-19 09:04:54.146580 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-19 09:04:54.146598 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-19 09:04:54.146628 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-19 09:04:54.146645 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.146661 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.146677 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-19 09:04:54.146694 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-19 09:04:54.146710 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-19 09:04:54.146727 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.146744 | orchestrator | 2025-02-19 09:04:54.146760 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-19 09:04:54.146779 | orchestrator | Wednesday 19 February 2025 08:50:52 +0000 (0:00:02.025) 0:01:19.736 **** 2025-02-19 09:04:54.146797 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.146814 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.146831 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.146848 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.146865 | orchestrator | 2025-02-19 09:04:54.146883 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.146902 | orchestrator | Wednesday 19 February 2025 08:50:55 +0000 (0:00:02.350) 0:01:22.087 **** 2025-02-19 09:04:54.146918 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.146935 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.146953 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.146971 | orchestrator | 2025-02-19 09:04:54.146988 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.147003 | orchestrator | Wednesday 19 February 2025 08:50:56 +0000 (0:00:01.112) 0:01:23.199 **** 2025-02-19 09:04:54.147015 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.147026 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.147038 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.147049 | orchestrator | 2025-02-19 09:04:54.147062 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2025-02-19 09:04:54.147073 | orchestrator | Wednesday 19 February 2025 08:50:57 +0000 (0:00:01.221) 0:01:24.421 **** 2025-02-19 09:04:54.147085 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.147097 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.147117 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.147193 | orchestrator | 2025-02-19 09:04:54.147209 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.147220 | orchestrator | Wednesday 19 February 2025 08:50:58 +0000 (0:00:00.883) 0:01:25.304 **** 2025-02-19 09:04:54.147231 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.147242 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.147252 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.147262 | orchestrator | 2025-02-19 09:04:54.147273 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.147392 | orchestrator | Wednesday 19 February 2025 08:51:00 +0000 (0:00:01.565) 0:01:26.870 **** 2025-02-19 09:04:54.147417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.147435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.147453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.147471 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.147488 | orchestrator | 2025-02-19 09:04:54.147508 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.147525 | orchestrator | Wednesday 19 February 2025 08:51:01 +0000 (0:00:01.279) 0:01:28.149 **** 2025-02-19 09:04:54.147542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.147580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.147599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.147618 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.147637 | orchestrator | 2025-02-19 09:04:54.147656 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.147675 | orchestrator | Wednesday 19 February 2025 08:51:02 +0000 (0:00:01.282) 0:01:29.431 **** 2025-02-19 09:04:54.147693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.147710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.147728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.147744 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.147763 | orchestrator | 2025-02-19 09:04:54.147781 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.147799 | orchestrator | Wednesday 19 February 2025 08:51:03 +0000 (0:00:01.107) 0:01:30.538 **** 2025-02-19 09:04:54.147818 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.147837 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.147856 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.147875 | orchestrator | 2025-02-19 09:04:54.147894 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.147912 | orchestrator | Wednesday 19 February 2025 08:51:05 +0000 (0:00:01.310) 0:01:31.849 **** 2025-02-19 09:04:54.147931 | 
orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-19 09:04:54.147949 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-19 09:04:54.147969 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-19 09:04:54.147988 | orchestrator | 2025-02-19 09:04:54.148008 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.148028 | orchestrator | Wednesday 19 February 2025 08:51:06 +0000 (0:00:01.867) 0:01:33.717 **** 2025-02-19 09:04:54.148048 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.148066 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.148086 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.148105 | orchestrator | 2025-02-19 09:04:54.148147 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.148169 | orchestrator | Wednesday 19 February 2025 08:51:07 +0000 (0:00:00.706) 0:01:34.423 **** 2025-02-19 09:04:54.148189 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.148207 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.148225 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.148244 | orchestrator | 2025-02-19 09:04:54.148262 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.148281 | orchestrator | Wednesday 19 February 2025 08:51:08 +0000 (0:00:01.279) 0:01:35.703 **** 2025-02-19 09:04:54.148300 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.148318 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.148337 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.148355 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.148373 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.148393 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.148413 | orchestrator | 2025-02-19 09:04:54.148431 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.148449 | orchestrator | Wednesday 19 February 2025 08:51:09 +0000 (0:00:00.722) 0:01:36.426 **** 2025-02-19 09:04:54.148469 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.148487 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.148505 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.148523 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.148554 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.148572 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.148591 | orchestrator | 2025-02-19 09:04:54.148612 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.148632 | orchestrator | Wednesday 19 February 2025 08:51:10 +0000 (0:00:00.984) 0:01:37.410 **** 2025-02-19 09:04:54.148653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.148672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.148691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2025-02-19 09:04:54.148711 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-19 09:04:54.148731 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.148752 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 09:04:54.148772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-19 09:04:54.148791 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 09:04:54.148811 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.148830 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 09:04:54.148989 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 09:04:54.149020 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.149039 | orchestrator | 2025-02-19 09:04:54.149058 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-19 09:04:54.149077 | orchestrator | Wednesday 19 February 2025 08:51:11 +0000 (0:00:01.346) 0:01:38.757 **** 2025-02-19 09:04:54.149095 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.149112 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.149157 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.149174 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.149191 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.149208 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.149222 | orchestrator | 2025-02-19 09:04:54.149236 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-19 09:04:54.149250 | orchestrator | Wednesday 19 February 2025 08:51:12 +0000 (0:00:01.016) 0:01:39.774 **** 2025-02-19 09:04:54.149264 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:04:54.149279 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:04:54.149292 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:04:54.149306 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-19 09:04:54.149320 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-19 09:04:54.149334 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-19 09:04:54.149347 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-19 09:04:54.149361 | orchestrator | 2025-02-19 09:04:54.149386 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-19 09:04:54.149401 | orchestrator | Wednesday 19 February 2025 08:51:13 +0000 (0:00:00.963) 0:01:40.737 **** 2025-02-19 09:04:54.149415 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:04:54.149429 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:04:54.149443 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:04:54.149457 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-19 09:04:54.149470 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-19 09:04:54.149484 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-19 09:04:54.149510 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-19 09:04:54.149524 | orchestrator | 2025-02-19 09:04:54.149538 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-19 09:04:54.149551 | orchestrator | Wednesday 19 February 2025 08:51:16 +0000 (0:00:02.727) 0:01:43.465 **** 2025-02-19 09:04:54.149566 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.149580 | orchestrator | 2025-02-19 09:04:54.149595 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-19 09:04:54.149609 | orchestrator | Wednesday 19 February 2025 08:51:18 +0000 (0:00:01.729) 0:01:45.194 **** 2025-02-19 09:04:54.149622 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.149636 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.149650 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.149666 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.149681 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.149697 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.149712 | orchestrator | 2025-02-19 09:04:54.149725 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-19 09:04:54.149740 | orchestrator | Wednesday 19 February 2025 08:51:19 +0000 (0:00:01.215) 0:01:46.410 **** 2025-02-19 09:04:54.149755 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.149769 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.149783 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.149798 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.149811 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.149826 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.149840 | orchestrator | 2025-02-19 09:04:54.149854 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-19 09:04:54.149868 | orchestrator | Wednesday 19 February 2025 08:51:21 +0000 (0:00:02.121) 0:01:48.531 **** 2025-02-19 09:04:54.149882 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.149892 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.149903 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.149912 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.149922 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.149939 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.149953 | orchestrator | 2025-02-19 09:04:54.149967 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-19 09:04:54.149980 | orchestrator | Wednesday 19 February 2025 08:51:24 +0000 (0:00:02.318) 0:01:50.850 **** 2025-02-19 09:04:54.149994 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.150008 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.150054 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.150069 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.150083 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.150097 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.150111 | orchestrator | 2025-02-19 09:04:54.150148 | 
orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-19 09:04:54.150162 | orchestrator | Wednesday 19 February 2025 08:51:26 +0000 (0:00:02.284) 0:01:53.134 **** 2025-02-19 09:04:54.150176 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.150191 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.150323 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.150340 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.150348 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.150357 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.150365 | orchestrator | 2025-02-19 09:04:54.150374 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-19 09:04:54.150383 | orchestrator | Wednesday 19 February 2025 08:51:27 +0000 (0:00:00.965) 0:01:54.100 **** 2025-02-19 09:04:54.150391 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.150410 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.150418 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.150427 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.150436 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.150445 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.150453 | orchestrator | 2025-02-19 09:04:54.150462 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-19 09:04:54.150471 | orchestrator | Wednesday 19 February 2025 08:51:28 +0000 (0:00:00.964) 0:01:55.064 **** 2025-02-19 09:04:54.150479 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.150488 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.150496 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.150505 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.150513 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.150522 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.150531 | orchestrator | 2025-02-19 09:04:54.150539 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-19 09:04:54.150548 | orchestrator | Wednesday 19 February 2025 08:51:28 +0000 (0:00:00.644) 0:01:55.709 **** 2025-02-19 09:04:54.150556 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.150565 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.150573 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.150582 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.150590 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.150599 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.150608 | orchestrator | 2025-02-19 09:04:54.150616 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-19 09:04:54.150625 | orchestrator | Wednesday 19 February 2025 08:51:29 +0000 (0:00:00.898) 0:01:56.608 **** 2025-02-19 09:04:54.150633 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.150642 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.150650 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.150659 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.150667 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.150678 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.150694 | orchestrator | 2025-02-19 09:04:54.150710 | orchestrator 
| TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-19 09:04:54.150725 | orchestrator | Wednesday 19 February 2025 08:51:30 +0000 (0:00:00.769) 0:01:57.378 **** 2025-02-19 09:04:54.150740 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.150755 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.150771 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.150786 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.150796 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.150805 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.150814 | orchestrator | 2025-02-19 09:04:54.150822 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-19 09:04:54.150837 | orchestrator | Wednesday 19 February 2025 08:51:31 +0000 (0:00:01.021) 0:01:58.399 **** 2025-02-19 09:04:54.150846 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.150855 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.150863 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.150872 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.150880 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.150888 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.150897 | orchestrator | 2025-02-19 09:04:54.150905 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-19 09:04:54.150914 | orchestrator | Wednesday 19 February 2025 08:51:33 +0000 (0:00:01.465) 0:01:59.865 **** 2025-02-19 09:04:54.150923 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.150933 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.150943 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.150958 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.150978 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.150988 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.150998 | orchestrator | 2025-02-19 09:04:54.151008 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-19 09:04:54.151017 | orchestrator | Wednesday 19 February 2025 08:51:34 +0000 (0:00:01.596) 0:02:01.462 **** 2025-02-19 09:04:54.151027 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.151037 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.151046 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.151056 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.151065 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.151075 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.151085 | orchestrator | 2025-02-19 09:04:54.151094 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-19 09:04:54.151104 | orchestrator | Wednesday 19 February 2025 08:51:35 +0000 (0:00:01.108) 0:02:02.570 **** 2025-02-19 09:04:54.151114 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.151143 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.151159 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.151238 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.151253 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.151266 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.151280 | orchestrator | 2025-02-19 09:04:54.151294 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] 
****************************** 2025-02-19 09:04:54.151307 | orchestrator | Wednesday 19 February 2025 08:51:37 +0000 (0:00:01.531) 0:02:04.102 **** 2025-02-19 09:04:54.151320 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.151334 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.151348 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.151361 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.151375 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.151390 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.151405 | orchestrator | 2025-02-19 09:04:54.151419 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-19 09:04:54.151535 | orchestrator | Wednesday 19 February 2025 08:51:38 +0000 (0:00:01.064) 0:02:05.167 **** 2025-02-19 09:04:54.151550 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.151559 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.151568 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.151576 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.151585 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.151593 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.151602 | orchestrator | 2025-02-19 09:04:54.151610 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-19 09:04:54.151619 | orchestrator | Wednesday 19 February 2025 08:51:39 +0000 (0:00:01.112) 0:02:06.279 **** 2025-02-19 09:04:54.151627 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.151636 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.151644 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.151653 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.151661 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.151670 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.151678 | orchestrator | 2025-02-19 09:04:54.151687 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-19 09:04:54.151696 | orchestrator | Wednesday 19 February 2025 08:51:40 +0000 (0:00:00.671) 0:02:06.950 **** 2025-02-19 09:04:54.151704 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.151712 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.151721 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.151730 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.151738 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.151747 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.151755 | orchestrator | 2025-02-19 09:04:54.151764 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-19 09:04:54.151790 | orchestrator | Wednesday 19 February 2025 08:51:41 +0000 (0:00:01.006) 0:02:07.956 **** 2025-02-19 09:04:54.151805 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.151820 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.151834 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.151848 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.151863 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.151878 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.151893 | orchestrator | 2025-02-19 09:04:54.151907 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-19 
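Note: each of the "check for a ... container" tasks above is effectively a query to the container runtime, asking whether a container whose name matches the daemon and hostname is currently running. A minimal, hypothetical sketch of that check (the actual ceph-handler role uses the configured container binary and its own naming scheme):

    import subprocess

    def container_running(name_fragment: str) -> bool:
        # Ask the container runtime for running containers whose name matches;
        # a non-empty ID list means the daemon container is up on this host.
        out = subprocess.check_output(
            ["docker", "ps", "-q", "--filter", f"name={name_fragment}"],
            text=True,
        )
        return bool(out.strip())

    # e.g. the mgr check on the first monitor node
    print(container_running("ceph-mgr-testbed-node-0"))
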
09:04:54.151964 | orchestrator | Wednesday 19 February 2025 08:51:41 +0000 (0:00:00.713) 0:02:08.669 **** 2025-02-19 09:04:54.151983 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.151997 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.152012 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.152026 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.152041 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.152060 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.152075 | orchestrator | 2025-02-19 09:04:54.152090 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-19 09:04:54.152104 | orchestrator | Wednesday 19 February 2025 08:51:42 +0000 (0:00:00.928) 0:02:09.598 **** 2025-02-19 09:04:54.152120 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.152193 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.152207 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.152222 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.152237 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.152252 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.152267 | orchestrator | 2025-02-19 09:04:54.152281 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-19 09:04:54.152296 | orchestrator | Wednesday 19 February 2025 08:51:43 +0000 (0:00:00.769) 0:02:10.368 **** 2025-02-19 09:04:54.152311 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.152324 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.152339 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.152352 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.152366 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.152381 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.152397 | orchestrator | 2025-02-19 09:04:54.152411 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-19 09:04:54.152427 | orchestrator | Wednesday 19 February 2025 08:51:44 +0000 (0:00:00.918) 0:02:11.287 **** 2025-02-19 09:04:54.152443 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.152458 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.152473 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.152489 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.152503 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.152518 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.152532 | orchestrator | 2025-02-19 09:04:54.152546 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-19 09:04:54.152559 | orchestrator | Wednesday 19 February 2025 08:51:45 +0000 (0:00:00.745) 0:02:12.032 **** 2025-02-19 09:04:54.152573 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.152587 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.152601 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.152615 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.152628 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.152642 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.152656 | orchestrator | 2025-02-19 09:04:54.152670 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-19 09:04:54.152685 | orchestrator 
| Wednesday 19 February 2025 08:51:46 +0000 (0:00:00.999) 0:02:13.031 **** 2025-02-19 09:04:54.152699 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.152713 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.152739 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.152753 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.152767 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.152782 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.152796 | orchestrator | 2025-02-19 09:04:54.152811 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-19 09:04:54.152827 | orchestrator | Wednesday 19 February 2025 08:51:46 +0000 (0:00:00.722) 0:02:13.754 **** 2025-02-19 09:04:54.152841 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.152856 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.152871 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.152886 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.152900 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.152915 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.152930 | orchestrator | 2025-02-19 09:04:54.153070 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-19 09:04:54.153100 | orchestrator | Wednesday 19 February 2025 08:51:47 +0000 (0:00:00.977) 0:02:14.732 **** 2025-02-19 09:04:54.153116 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.153151 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.153166 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.153181 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.153195 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.153210 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.153225 | orchestrator | 2025-02-19 09:04:54.153241 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-19 09:04:54.153256 | orchestrator | Wednesday 19 February 2025 08:51:48 +0000 (0:00:00.776) 0:02:15.508 **** 2025-02-19 09:04:54.153271 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.153285 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.153298 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.153312 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.153325 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.153339 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.153352 | orchestrator | 2025-02-19 09:04:54.153367 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-19 09:04:54.153381 | orchestrator | Wednesday 19 February 2025 08:51:49 +0000 (0:00:01.022) 0:02:16.530 **** 2025-02-19 09:04:54.153395 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.153409 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.153434 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.153448 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.153461 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.153475 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.153487 | orchestrator | 2025-02-19 09:04:54.153501 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 
'ceph-volume lvm batch --report' (new report)] *** 2025-02-19 09:04:54.153515 | orchestrator | Wednesday 19 February 2025 08:51:50 +0000 (0:00:00.718) 0:02:17.249 **** 2025-02-19 09:04:54.153529 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.153543 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.153557 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.153572 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.153587 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.153602 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.153617 | orchestrator | 2025-02-19 09:04:54.153633 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-19 09:04:54.153647 | orchestrator | Wednesday 19 February 2025 08:51:51 +0000 (0:00:01.147) 0:02:18.396 **** 2025-02-19 09:04:54.153663 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.153677 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.153692 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.153720 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.153737 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.153752 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.153767 | orchestrator | 2025-02-19 09:04:54.153783 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-19 09:04:54.153797 | orchestrator | Wednesday 19 February 2025 08:51:52 +0000 (0:00:01.142) 0:02:19.539 **** 2025-02-19 09:04:54.153812 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.153827 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.153842 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.153857 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.153873 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.153887 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.153902 | orchestrator | 2025-02-19 09:04:54.153916 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-19 09:04:54.153932 | orchestrator | Wednesday 19 February 2025 08:51:54 +0000 (0:00:01.599) 0:02:21.139 **** 2025-02-19 09:04:54.153945 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.153960 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.153976 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.153991 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.154010 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.154059 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.154074 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.154088 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.154103 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.154117 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.154202 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.154218 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.154234 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.154250 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.154264 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.154278 | 
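Note: the num_osds tasks above are skipped in this run, but when an LVM OSD scenario is active they parse the JSON report from 'ceph-volume lvm batch --report' (plus 'ceph-volume lvm list' for already-created OSDs) to decide how many OSDs the node will carry. A hypothetical sketch of that counting step, assuming the JSON report layouts used by recent ceph-volume releases:

    import json
    import subprocess

    def planned_osd_count(devices: list[str]) -> int:
        # Dry-run report of what 'ceph-volume lvm batch' would create.
        cmd = ["ceph-volume", "lvm", "batch", "--report", "--format=json", *devices]
        report = json.loads(subprocess.check_output(cmd, text=True))
        # Newer releases return {"osds": [...]}, older ones a bare list.
        osds = report.get("osds", []) if isinstance(report, dict) else report
        return len(osds)

    print(planned_osd_count(["/dev/sdb", "/dev/sdc"]))
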
orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.154293 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.154307 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.154322 | orchestrator | 2025-02-19 09:04:54.154337 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-19 09:04:54.154354 | orchestrator | Wednesday 19 February 2025 08:51:55 +0000 (0:00:01.099) 0:02:22.238 **** 2025-02-19 09:04:54.154369 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-19 09:04:54.154384 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-19 09:04:54.154398 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.154413 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-19 09:04:54.154428 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-19 09:04:54.154441 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.154455 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-19 09:04:54.154471 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-19 09:04:54.154607 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-19 09:04:54.154630 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-19 09:04:54.154645 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.154661 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-19 09:04:54.154676 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-19 09:04:54.154689 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.154703 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.154717 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-19 09:04:54.154743 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-19 09:04:54.154756 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.154771 | orchestrator | 2025-02-19 09:04:54.154785 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-19 09:04:54.154800 | orchestrator | Wednesday 19 February 2025 08:51:56 +0000 (0:00:01.347) 0:02:23.586 **** 2025-02-19 09:04:54.154815 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.154829 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.154844 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.154858 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.154874 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.154888 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.154904 | orchestrator | 2025-02-19 09:04:54.154919 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-19 09:04:54.154934 | orchestrator | Wednesday 19 February 2025 08:51:57 +0000 (0:00:01.135) 0:02:24.722 **** 2025-02-19 09:04:54.154949 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.154965 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.154978 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.154992 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.155006 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.155020 | orchestrator | skipping: [testbed-node-5] 2025-02-19 
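Note: the _osd_memory_target tasks just above would only compute a value on hosts that carry OSDs; the idea is to divide a share of the node's RAM across its OSDs instead of letting every OSD assume the stock default. A rough illustration follows; the 0.7 share and the 4 GiB floor are placeholder assumptions, not the role's actual defaults:

    def osd_memory_target(total_mem_bytes: int, num_osds: int,
                          share: float = 0.7) -> int:
        # Ceph's stock default target is 4 GiB per OSD; never go below that here.
        floor = 4 * 1024 ** 3
        per_osd = int(total_mem_bytes * share / max(num_osds, 1))
        return max(per_osd, floor)

    # A 64 GiB node with 6 OSDs gets roughly 7.5 GiB per OSD with these assumptions.
    print(osd_memory_target(64 * 1024 ** 3, 6))
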
09:04:54.155035 | orchestrator | 2025-02-19 09:04:54.155049 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.155064 | orchestrator | Wednesday 19 February 2025 08:51:59 +0000 (0:00:01.423) 0:02:26.146 **** 2025-02-19 09:04:54.155079 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.155094 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.155109 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.155149 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.155165 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.155188 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.155203 | orchestrator | 2025-02-19 09:04:54.155220 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.155236 | orchestrator | Wednesday 19 February 2025 08:52:00 +0000 (0:00:00.869) 0:02:27.015 **** 2025-02-19 09:04:54.155251 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.155266 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.155282 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.155298 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.155314 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.155330 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.155346 | orchestrator | 2025-02-19 09:04:54.155362 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:04:54.155376 | orchestrator | Wednesday 19 February 2025 08:52:01 +0000 (0:00:01.115) 0:02:28.131 **** 2025-02-19 09:04:54.155391 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.155403 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.155417 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.155429 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.155443 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.155456 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.155469 | orchestrator | 2025-02-19 09:04:54.155483 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.155498 | orchestrator | Wednesday 19 February 2025 08:52:02 +0000 (0:00:00.756) 0:02:28.887 **** 2025-02-19 09:04:54.155513 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.155528 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.155542 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.155557 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.155571 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.155596 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.155611 | orchestrator | 2025-02-19 09:04:54.155624 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.155637 | orchestrator | Wednesday 19 February 2025 08:52:03 +0000 (0:00:01.005) 0:02:29.893 **** 2025-02-19 09:04:54.155649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.155663 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.155677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.155690 | orchestrator | skipping: [testbed-node-0] 2025-02-19 
09:04:54.155704 | orchestrator | 2025-02-19 09:04:54.155718 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.155731 | orchestrator | Wednesday 19 February 2025 08:52:03 +0000 (0:00:00.517) 0:02:30.411 **** 2025-02-19 09:04:54.155745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.155758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.155773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.155787 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.155800 | orchestrator | 2025-02-19 09:04:54.155814 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.155826 | orchestrator | Wednesday 19 February 2025 08:52:04 +0000 (0:00:00.673) 0:02:31.085 **** 2025-02-19 09:04:54.155839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.155853 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.155867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.155992 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.156012 | orchestrator | 2025-02-19 09:04:54.156027 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.156039 | orchestrator | Wednesday 19 February 2025 08:52:04 +0000 (0:00:00.556) 0:02:31.641 **** 2025-02-19 09:04:54.156052 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.156065 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.156078 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.156090 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.156102 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.156116 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.156148 | orchestrator | 2025-02-19 09:04:54.156163 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.156177 | orchestrator | Wednesday 19 February 2025 08:52:05 +0000 (0:00:00.831) 0:02:32.473 **** 2025-02-19 09:04:54.156191 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.156214 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.156228 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.156242 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.156254 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.156267 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.156280 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.156292 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.156305 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.156318 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.156330 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.156342 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.156354 | orchestrator | 2025-02-19 09:04:54.156368 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.156381 | orchestrator | Wednesday 19 February 2025 08:52:07 +0000 (0:00:02.048) 0:02:34.522 **** 2025-02-19 09:04:54.156393 | orchestrator | skipping: 
[testbed-node-0] 2025-02-19 09:04:54.156405 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.156417 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.156443 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.156456 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.156468 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.156480 | orchestrator | 2025-02-19 09:04:54.156493 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.156506 | orchestrator | Wednesday 19 February 2025 08:52:08 +0000 (0:00:01.238) 0:02:35.760 **** 2025-02-19 09:04:54.156519 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.156532 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.156544 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.156557 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.156568 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.156576 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.156584 | orchestrator | 2025-02-19 09:04:54.156592 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.156603 | orchestrator | Wednesday 19 February 2025 08:52:10 +0000 (0:00:01.153) 0:02:36.914 **** 2025-02-19 09:04:54.156611 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.156621 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.156630 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.156639 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.156648 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.156658 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.156667 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.156676 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.156684 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.156694 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.156703 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.156712 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.156729 | orchestrator | 2025-02-19 09:04:54.156739 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.156748 | orchestrator | Wednesday 19 February 2025 08:52:11 +0000 (0:00:01.401) 0:02:38.315 **** 2025-02-19 09:04:54.156757 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.156766 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.156776 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.156786 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.156795 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.156808 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.156818 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.156827 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.156837 | orchestrator | skipping: [testbed-node-5] 2025-02-19 
09:04:54.156846 | orchestrator | 2025-02-19 09:04:54.156855 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.156864 | orchestrator | Wednesday 19 February 2025 08:52:12 +0000 (0:00:01.312) 0:02:39.627 **** 2025-02-19 09:04:54.156873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.156882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.156892 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.156901 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.156910 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-19 09:04:54.156919 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-19 09:04:54.156931 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-19 09:04:54.156940 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.156950 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-19 09:04:54.157055 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-19 09:04:54.157068 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-19 09:04:54.157076 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.157084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.157092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.157100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.157108 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-19 09:04:54.157116 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 09:04:54.157169 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 09:04:54.157179 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.157187 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.157195 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-19 09:04:54.157203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 09:04:54.157211 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 09:04:54.157219 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.157227 | orchestrator | 2025-02-19 09:04:54.157235 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-19 09:04:54.157243 | orchestrator | Wednesday 19 February 2025 08:52:16 +0000 (0:00:03.194) 0:02:42.822 **** 2025-02-19 09:04:54.157252 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.157260 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.157268 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.157276 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.157284 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.157291 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.157299 | orchestrator | 2025-02-19 09:04:54.157308 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-19 09:04:54.157316 | orchestrator | Wednesday 19 February 2025 08:52:17 +0000 (0:00:01.844) 0:02:44.666 **** 2025-02-19 09:04:54.157324 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.157332 | 
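Note: the rgw_instances facts above are skipped on the control nodes, but the items echoed for testbed-node-3/4/5 show the shape the fact takes on RGW hosts: one entry per instance with a name, bind address and frontend port. A hypothetical sketch that builds the same structure from the values seen in the log:

    def build_rgw_instances(address: str, base_port: int = 8081,
                            instances: int = 1) -> list[dict]:
        # Mirrors the logged items, e.g.
        # {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13',
        #  'radosgw_frontend_port': 8081}
        return [
            {
                "instance_name": f"rgw{i}",
                "radosgw_address": address,
                "radosgw_frontend_port": base_port + i,
            }
            for i in range(instances)
        ]

    print(build_rgw_instances("192.168.16.13"))
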
orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.157340 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.157348 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.157356 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.157364 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-19 09:04:54.157372 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.157380 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-19 09:04:54.157388 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.157396 | orchestrator | 2025-02-19 09:04:54.157404 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-19 09:04:54.157412 | orchestrator | Wednesday 19 February 2025 08:52:19 +0000 (0:00:01.592) 0:02:46.259 **** 2025-02-19 09:04:54.157420 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.157428 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.157436 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.157444 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.157452 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.157460 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.157468 | orchestrator | 2025-02-19 09:04:54.157476 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-19 09:04:54.157484 | orchestrator | Wednesday 19 February 2025 08:52:20 +0000 (0:00:01.316) 0:02:47.575 **** 2025-02-19 09:04:54.157492 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.157500 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.157508 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.157516 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.157530 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.157538 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.157546 | orchestrator | 2025-02-19 09:04:54.157554 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-02-19 09:04:54.157561 | orchestrator | Wednesday 19 February 2025 08:52:22 +0000 (0:00:01.550) 0:02:49.126 **** 2025-02-19 09:04:54.157569 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.157577 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.157585 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.157593 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.157601 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.157609 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.157617 | orchestrator | 2025-02-19 09:04:54.157625 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-02-19 09:04:54.157633 | orchestrator | Wednesday 19 February 2025 08:52:24 +0000 (0:00:01.776) 0:02:50.902 **** 2025-02-19 09:04:54.157641 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.157648 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.157656 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.157664 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.157673 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.157682 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.157692 | orchestrator | 2025-02-19 09:04:54.157701 | orchestrator | TASK [ceph-container-common : include 
prerequisites.yml] *********************** 2025-02-19 09:04:54.157710 | orchestrator | Wednesday 19 February 2025 08:52:27 +0000 (0:00:03.018) 0:02:53.921 **** 2025-02-19 09:04:54.157720 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.157730 | orchestrator | 2025-02-19 09:04:54.157739 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-02-19 09:04:54.157748 | orchestrator | Wednesday 19 February 2025 08:52:28 +0000 (0:00:01.627) 0:02:55.548 **** 2025-02-19 09:04:54.157757 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.157766 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.157784 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.157798 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.157811 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.157824 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.157837 | orchestrator | 2025-02-19 09:04:54.157924 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-02-19 09:04:54.157939 | orchestrator | Wednesday 19 February 2025 08:52:30 +0000 (0:00:01.289) 0:02:56.837 **** 2025-02-19 09:04:54.157949 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.157958 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.157967 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.157976 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.157986 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.157995 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.158004 | orchestrator | 2025-02-19 09:04:54.158013 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-02-19 09:04:54.158044 | orchestrator | Wednesday 19 February 2025 08:52:31 +0000 (0:00:01.076) 0:02:57.914 **** 2025-02-19 09:04:54.158052 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-19 09:04:54.158061 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-19 09:04:54.158073 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-19 09:04:54.158082 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-19 09:04:54.158090 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-19 09:04:54.158097 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-19 09:04:54.158116 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-19 09:04:54.158145 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-19 09:04:54.158156 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-19 09:04:54.158164 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-19 09:04:54.158172 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-19 09:04:54.158180 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-19 
09:04:54.158188 | orchestrator | 2025-02-19 09:04:54.158196 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-02-19 09:04:54.158204 | orchestrator | Wednesday 19 February 2025 08:52:33 +0000 (0:00:02.416) 0:03:00.331 **** 2025-02-19 09:04:54.158212 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.158220 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.158228 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.158236 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.158244 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.158252 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.158259 | orchestrator | 2025-02-19 09:04:54.158267 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-02-19 09:04:54.158275 | orchestrator | Wednesday 19 February 2025 08:52:35 +0000 (0:00:01.502) 0:03:01.833 **** 2025-02-19 09:04:54.158283 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.158291 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.158299 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.158307 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.158315 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.158323 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.158331 | orchestrator | 2025-02-19 09:04:54.158339 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-02-19 09:04:54.158346 | orchestrator | Wednesday 19 February 2025 08:52:36 +0000 (0:00:01.308) 0:03:03.142 **** 2025-02-19 09:04:54.158354 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.158362 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.158370 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.158377 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.158387 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.158401 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.158414 | orchestrator | 2025-02-19 09:04:54.158427 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-02-19 09:04:54.158441 | orchestrator | Wednesday 19 February 2025 08:52:37 +0000 (0:00:01.141) 0:03:04.284 **** 2025-02-19 09:04:54.158454 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.158468 | orchestrator | 2025-02-19 09:04:54.158481 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:quincy image] *** 2025-02-19 09:04:54.158495 | orchestrator | Wednesday 19 February 2025 08:52:39 +0000 (0:00:01.865) 0:03:06.149 **** 2025-02-19 09:04:54.158504 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.158511 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.158519 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.158527 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.158535 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.158542 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.158550 | orchestrator | 2025-02-19 09:04:54.158558 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-02-19 09:04:54.158566 | orchestrator | Wednesday 19 February 
2025 08:53:17 +0000 (0:00:38.279) 0:03:44.429 **** 2025-02-19 09:04:54.158574 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-19 09:04:54.158587 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-19 09:04:54.158596 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-19 09:04:54.158603 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.158612 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-19 09:04:54.158620 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-19 09:04:54.158688 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-19 09:04:54.158699 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.158707 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-19 09:04:54.158720 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-19 09:04:54.158729 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-19 09:04:54.158737 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.158748 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-19 09:04:54.158756 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-19 09:04:54.158764 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-19 09:04:54.158772 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.158780 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-19 09:04:54.158788 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-19 09:04:54.158796 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-19 09:04:54.158804 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.158811 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-19 09:04:54.158820 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-19 09:04:54.158827 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-19 09:04:54.158836 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.158844 | orchestrator | 2025-02-19 09:04:54.158852 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-02-19 09:04:54.158860 | orchestrator | Wednesday 19 February 2025 08:53:18 +0000 (0:00:01.123) 0:03:45.552 **** 2025-02-19 09:04:54.158867 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.158876 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.158884 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.158892 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.158900 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.158908 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.158916 | orchestrator | 2025-02-19 09:04:54.158924 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-02-19 09:04:54.158932 | orchestrator | Wednesday 19 February 2025 08:53:19 +0000 (0:00:00.720) 0:03:46.272 
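Note: pulling registry.osism.tech/osism/ceph-daemon:quincy is the single longest step in this stretch (about 38 seconds on the first node); the alertmanager/prometheus/grafana and node-exporter pulls are skipped because the dashboard stack is not deployed by this role here. A minimal, hypothetical retry wrapper around such a pull, assuming a Docker CLI on the target hosts:

    import subprocess
    import time

    def pull_image(image: str, retries: int = 3, delay: float = 10.0) -> None:
        # Retry a few times before giving up; registry hiccups are common in CI.
        for attempt in range(1, retries + 1):
            try:
                subprocess.run(["docker", "pull", image], check=True)
                return
            except subprocess.CalledProcessError:
                if attempt == retries:
                    raise
                time.sleep(delay)

    pull_image("registry.osism.tech/osism/ceph-daemon:quincy")
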
**** 2025-02-19 09:04:54.158940 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.158948 | orchestrator | 2025-02-19 09:04:54.158956 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-02-19 09:04:54.158964 | orchestrator | Wednesday 19 February 2025 08:53:19 +0000 (0:00:00.196) 0:03:46.469 **** 2025-02-19 09:04:54.158972 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.158980 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.158988 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.158996 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159007 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159015 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159023 | orchestrator | 2025-02-19 09:04:54.159031 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-02-19 09:04:54.159044 | orchestrator | Wednesday 19 February 2025 08:53:20 +0000 (0:00:00.929) 0:03:47.399 **** 2025-02-19 09:04:54.159052 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159060 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.159068 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.159076 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159084 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159092 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159100 | orchestrator | 2025-02-19 09:04:54.159109 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-02-19 09:04:54.159116 | orchestrator | Wednesday 19 February 2025 08:53:21 +0000 (0:00:00.832) 0:03:48.231 **** 2025-02-19 09:04:54.159171 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159181 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.159188 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.159196 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159204 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159212 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159220 | orchestrator | 2025-02-19 09:04:54.159228 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-02-19 09:04:54.159236 | orchestrator | Wednesday 19 February 2025 08:53:22 +0000 (0:00:00.968) 0:03:49.199 **** 2025-02-19 09:04:54.159244 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.159252 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.159260 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.159268 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.159276 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.159283 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.159292 | orchestrator | 2025-02-19 09:04:54.159299 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-02-19 09:04:54.159307 | orchestrator | Wednesday 19 February 2025 08:53:23 +0000 (0:00:01.473) 0:03:50.673 **** 2025-02-19 09:04:54.159315 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.159324 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.159333 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.159343 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.159352 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.159361 | orchestrator | ok: 
[testbed-node-5] 2025-02-19 09:04:54.159369 | orchestrator | 2025-02-19 09:04:54.159378 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-02-19 09:04:54.159392 | orchestrator | Wednesday 19 February 2025 08:53:24 +0000 (0:00:00.806) 0:03:51.480 **** 2025-02-19 09:04:54.159408 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.159423 | orchestrator | 2025-02-19 09:04:54.159508 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-02-19 09:04:54.159529 | orchestrator | Wednesday 19 February 2025 08:53:26 +0000 (0:00:01.681) 0:03:53.161 **** 2025-02-19 09:04:54.159539 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159548 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.159558 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.159567 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159576 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159585 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159594 | orchestrator | 2025-02-19 09:04:54.159603 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-02-19 09:04:54.159613 | orchestrator | Wednesday 19 February 2025 08:53:27 +0000 (0:00:00.846) 0:03:54.007 **** 2025-02-19 09:04:54.159622 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159631 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.159640 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.159649 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159658 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159674 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159682 | orchestrator | 2025-02-19 09:04:54.159690 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-02-19 09:04:54.159698 | orchestrator | Wednesday 19 February 2025 08:53:28 +0000 (0:00:01.041) 0:03:55.048 **** 2025-02-19 09:04:54.159706 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159713 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.159720 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.159727 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159734 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159741 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159748 | orchestrator | 2025-02-19 09:04:54.159755 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-02-19 09:04:54.159766 | orchestrator | Wednesday 19 February 2025 08:53:28 +0000 (0:00:00.714) 0:03:55.763 **** 2025-02-19 09:04:54.159773 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159780 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.159787 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.159794 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159806 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159817 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159828 | orchestrator | 2025-02-19 09:04:54.159840 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-02-19 09:04:54.159852 
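Note: the release.yml include runs one conditional set_fact per release name (jewel through quincy); with the quincy image only the quincy check matches. The underlying idea is a lookup from the major version reported by 'ceph --version' to a release name; a hypothetical sketch:

    # Major-version-to-release table for the names checked in these tasks.
    RELEASES = {10: "jewel", 11: "kraken", 12: "luminous", 13: "mimic",
                14: "nautilus", 15: "octopus", 16: "pacific", 17: "quincy"}

    def ceph_release(version_stdout: str) -> str:
        # e.g. "ceph version 17.2.7 (....) quincy (stable)"
        major = int(version_stdout.split()[2].split(".")[0])
        return RELEASES.get(major, "unknown")

    print(ceph_release("ceph version 17.2.7 (deadbeef) quincy (stable)"))
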
| orchestrator | Wednesday 19 February 2025 08:53:29 +0000 (0:00:00.827) 0:03:56.590 **** 2025-02-19 09:04:54.159861 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159872 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.159879 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.159886 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159893 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159900 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159907 | orchestrator | 2025-02-19 09:04:54.159914 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-02-19 09:04:54.159921 | orchestrator | Wednesday 19 February 2025 08:53:30 +0000 (0:00:00.796) 0:03:57.387 **** 2025-02-19 09:04:54.159928 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159935 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.159942 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.159948 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.159955 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.159962 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.159969 | orchestrator | 2025-02-19 09:04:54.159976 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-02-19 09:04:54.159983 | orchestrator | Wednesday 19 February 2025 08:53:31 +0000 (0:00:00.949) 0:03:58.336 **** 2025-02-19 09:04:54.159990 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.159997 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.160003 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.160010 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.160017 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.160024 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.160031 | orchestrator | 2025-02-19 09:04:54.160038 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-02-19 09:04:54.160044 | orchestrator | Wednesday 19 February 2025 08:53:32 +0000 (0:00:00.694) 0:03:59.030 **** 2025-02-19 09:04:54.160051 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.160058 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.160069 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.160080 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.160092 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.160102 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.160114 | orchestrator | 2025-02-19 09:04:54.160139 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-19 09:04:54.160161 | orchestrator | Wednesday 19 February 2025 08:53:33 +0000 (0:00:01.717) 0:04:00.748 **** 2025-02-19 09:04:54.160174 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.160186 | orchestrator | 2025-02-19 09:04:54.160199 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-02-19 09:04:54.160211 | orchestrator | Wednesday 19 February 2025 08:53:35 +0000 (0:00:01.515) 0:04:02.263 **** 2025-02-19 09:04:54.160224 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-02-19 09:04:54.160238 | orchestrator | changed: 
[testbed-node-1] => (item=/etc/ceph) 2025-02-19 09:04:54.160251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-02-19 09:04:54.160262 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-02-19 09:04:54.160275 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-02-19 09:04:54.160288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-02-19 09:04:54.160300 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-02-19 09:04:54.160313 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-02-19 09:04:54.160384 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-02-19 09:04:54.160400 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-02-19 09:04:54.160413 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-02-19 09:04:54.160424 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-02-19 09:04:54.160435 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-02-19 09:04:54.160446 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-02-19 09:04:54.160456 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-02-19 09:04:54.160463 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-02-19 09:04:54.160470 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-02-19 09:04:54.160477 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-02-19 09:04:54.160484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-02-19 09:04:54.160491 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-02-19 09:04:54.160498 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-02-19 09:04:54.160505 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-02-19 09:04:54.160512 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-02-19 09:04:54.160519 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-02-19 09:04:54.160525 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-02-19 09:04:54.160532 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-02-19 09:04:54.160539 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-02-19 09:04:54.160546 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-02-19 09:04:54.160553 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-02-19 09:04:54.160560 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-02-19 09:04:54.160566 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-02-19 09:04:54.160573 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-02-19 09:04:54.160580 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-02-19 09:04:54.160587 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-02-19 09:04:54.160594 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-02-19 09:04:54.160606 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-02-19 09:04:54.160613 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-02-19 09:04:54.160620 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-02-19 09:04:54.160633 
| orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-02-19 09:04:54.160640 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-19 09:04:54.160647 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-19 09:04:54.160654 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-02-19 09:04:54.160661 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-02-19 09:04:54.160668 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-19 09:04:54.160674 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-19 09:04:54.160681 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-02-19 09:04:54.160688 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-19 09:04:54.160695 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-19 09:04:54.160702 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-19 09:04:54.160709 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-19 09:04:54.160715 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-19 09:04:54.160722 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-19 09:04:54.160729 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-19 09:04:54.160736 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-19 09:04:54.160743 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-19 09:04:54.160750 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-19 09:04:54.160756 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-19 09:04:54.160763 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-19 09:04:54.160770 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-19 09:04:54.160778 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-19 09:04:54.160790 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-19 09:04:54.160801 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-19 09:04:54.160813 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-19 09:04:54.160823 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-19 09:04:54.160835 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-19 09:04:54.160846 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-19 09:04:54.160859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-19 09:04:54.160928 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-19 09:04:54.160945 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-19 09:04:54.160958 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-19 09:04:54.160970 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-19 09:04:54.160980 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-19 09:04:54.160992 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-19 09:04:54.160999 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-19 09:04:54.161006 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-19 09:04:54.161013 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-02-19 09:04:54.161020 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-19 09:04:54.161027 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-02-19 09:04:54.161040 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-19 09:04:54.161047 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-02-19 09:04:54.161054 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-02-19 09:04:54.161061 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-19 09:04:54.161068 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-02-19 09:04:54.161075 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-02-19 09:04:54.161082 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-02-19 09:04:54.161093 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-02-19 09:04:54.161100 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-02-19 09:04:54.161107 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-02-19 09:04:54.161114 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-02-19 09:04:54.161121 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-02-19 09:04:54.161143 | orchestrator | 2025-02-19 09:04:54.161151 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-19 09:04:54.161158 | orchestrator | Wednesday 19 February 2025 08:53:41 +0000 (0:00:06.197) 0:04:08.460 **** 2025-02-19 09:04:54.161165 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161175 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161182 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161190 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.161197 | orchestrator | 2025-02-19 09:04:54.161204 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-02-19 09:04:54.161211 | orchestrator | Wednesday 19 February 2025 08:53:42 +0000 (0:00:01.259) 0:04:09.720 **** 2025-02-19 09:04:54.161218 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.161226 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.161233 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.161240 | orchestrator | 2025-02-19 09:04:54.161247 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-02-19 09:04:54.161254 
| orchestrator | Wednesday 19 February 2025 08:53:43 +0000 (0:00:00.976) 0:04:10.697 **** 2025-02-19 09:04:54.161261 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.161268 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.161275 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.161282 | orchestrator | 2025-02-19 09:04:54.161289 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-19 09:04:54.161296 | orchestrator | Wednesday 19 February 2025 08:53:45 +0000 (0:00:01.176) 0:04:11.873 **** 2025-02-19 09:04:54.161303 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161310 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161317 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161324 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.161331 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.161338 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.161345 | orchestrator | 2025-02-19 09:04:54.161352 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-19 09:04:54.161359 | orchestrator | Wednesday 19 February 2025 08:53:45 +0000 (0:00:00.778) 0:04:12.651 **** 2025-02-19 09:04:54.161373 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161381 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161388 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161395 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.161402 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.161408 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.161415 | orchestrator | 2025-02-19 09:04:54.161422 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-19 09:04:54.161429 | orchestrator | Wednesday 19 February 2025 08:53:46 +0000 (0:00:00.558) 0:04:13.210 **** 2025-02-19 09:04:54.161436 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161491 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161501 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161508 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.161515 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.161522 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.161529 | orchestrator | 2025-02-19 09:04:54.161536 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-19 09:04:54.161546 | orchestrator | Wednesday 19 February 2025 08:53:47 +0000 (0:00:00.728) 0:04:13.938 **** 2025-02-19 09:04:54.161553 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161560 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161567 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161574 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.161581 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.161588 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.161595 | orchestrator | 2025-02-19 09:04:54.161602 | orchestrator | TASK [ceph-config : set_fact _devices] 
***************************************** 2025-02-19 09:04:54.161609 | orchestrator | Wednesday 19 February 2025 08:53:47 +0000 (0:00:00.566) 0:04:14.505 **** 2025-02-19 09:04:54.161616 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161623 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161634 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161641 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.161648 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.161655 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.161662 | orchestrator | 2025-02-19 09:04:54.161669 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-19 09:04:54.161676 | orchestrator | Wednesday 19 February 2025 08:53:48 +0000 (0:00:00.813) 0:04:15.318 **** 2025-02-19 09:04:54.161683 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161690 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161697 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161704 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.161711 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.161718 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.161724 | orchestrator | 2025-02-19 09:04:54.161731 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-19 09:04:54.161738 | orchestrator | Wednesday 19 February 2025 08:53:49 +0000 (0:00:00.668) 0:04:15.987 **** 2025-02-19 09:04:54.161746 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161752 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161759 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161766 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.161773 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.161779 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.161786 | orchestrator | 2025-02-19 09:04:54.161793 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-19 09:04:54.161800 | orchestrator | Wednesday 19 February 2025 08:53:50 +0000 (0:00:00.881) 0:04:16.869 **** 2025-02-19 09:04:54.161808 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161822 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161829 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161836 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.161843 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.161849 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.161856 | orchestrator | 2025-02-19 09:04:54.161863 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-19 09:04:54.161870 | orchestrator | Wednesday 19 February 2025 08:53:51 +0000 (0:00:01.045) 0:04:17.914 **** 2025-02-19 09:04:54.161877 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161884 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161891 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161898 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.161905 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.161912 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.161919 | orchestrator | 
2025-02-19 09:04:54.161926 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-19 09:04:54.161933 | orchestrator | Wednesday 19 February 2025 08:53:53 +0000 (0:00:02.438) 0:04:20.353 **** 2025-02-19 09:04:54.161940 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.161947 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.161954 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.161961 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.161968 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.161975 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.161982 | orchestrator | 2025-02-19 09:04:54.161989 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-19 09:04:54.161996 | orchestrator | Wednesday 19 February 2025 08:53:54 +0000 (0:00:00.908) 0:04:21.262 **** 2025-02-19 09:04:54.162003 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.162010 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.162037 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.162046 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.162053 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.162060 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.162067 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.162074 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.162081 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.162088 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.162095 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.162102 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.162109 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.162116 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.162136 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.162145 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.162153 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.162161 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.162169 | orchestrator | 2025-02-19 09:04:54.162176 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-19 09:04:54.162232 | orchestrator | Wednesday 19 February 2025 08:53:55 +0000 (0:00:01.366) 0:04:22.628 **** 2025-02-19 09:04:54.162247 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-19 09:04:54.162259 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-19 09:04:54.162270 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.162281 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-19 09:04:54.162292 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-19 09:04:54.162303 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.162312 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-19 09:04:54.162330 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-19 09:04:54.162340 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.162351 | orchestrator | ok: [testbed-node-3] => (item=osd memory 
target) 2025-02-19 09:04:54.162362 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-02-19 09:04:54.162373 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-02-19 09:04:54.162385 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-02-19 09:04:54.162395 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-02-19 09:04:54.162407 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-02-19 09:04:54.162418 | orchestrator | 2025-02-19 09:04:54.162430 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-19 09:04:54.162441 | orchestrator | Wednesday 19 February 2025 08:53:56 +0000 (0:00:00.916) 0:04:23.545 **** 2025-02-19 09:04:54.162453 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.162464 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.162474 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.162484 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.162495 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.162506 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.162517 | orchestrator | 2025-02-19 09:04:54.162528 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-19 09:04:54.162539 | orchestrator | Wednesday 19 February 2025 08:53:57 +0000 (0:00:01.094) 0:04:24.639 **** 2025-02-19 09:04:54.162550 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.162567 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.162578 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.162589 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.162599 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.162610 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.162622 | orchestrator | 2025-02-19 09:04:54.162633 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.162645 | orchestrator | Wednesday 19 February 2025 08:53:58 +0000 (0:00:00.839) 0:04:25.479 **** 2025-02-19 09:04:54.162656 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.162667 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.162677 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.162687 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.162698 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.162708 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.162719 | orchestrator | 2025-02-19 09:04:54.162731 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.162742 | orchestrator | Wednesday 19 February 2025 08:53:59 +0000 (0:00:01.170) 0:04:26.649 **** 2025-02-19 09:04:54.162752 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.162764 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.162775 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.162785 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.162796 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.162807 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.162818 | orchestrator | 2025-02-19 09:04:54.162828 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:04:54.162840 
| orchestrator | Wednesday 19 February 2025 08:54:00 +0000 (0:00:00.887) 0:04:27.537 **** 2025-02-19 09:04:54.162850 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.162861 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.162871 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.162882 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.162892 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.162903 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.162914 | orchestrator | 2025-02-19 09:04:54.162926 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.162946 | orchestrator | Wednesday 19 February 2025 08:54:01 +0000 (0:00:01.004) 0:04:28.542 **** 2025-02-19 09:04:54.162957 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.162968 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.162978 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.162990 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.163002 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.163013 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.163025 | orchestrator | 2025-02-19 09:04:54.163036 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.163047 | orchestrator | Wednesday 19 February 2025 08:54:02 +0000 (0:00:01.013) 0:04:29.555 **** 2025-02-19 09:04:54.163057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.163068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.163080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.163091 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163102 | orchestrator | 2025-02-19 09:04:54.163113 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.163177 | orchestrator | Wednesday 19 February 2025 08:54:03 +0000 (0:00:00.635) 0:04:30.190 **** 2025-02-19 09:04:54.163192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.163204 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.163216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.163226 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163233 | orchestrator | 2025-02-19 09:04:54.163322 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.163333 | orchestrator | Wednesday 19 February 2025 08:54:03 +0000 (0:00:00.550) 0:04:30.741 **** 2025-02-19 09:04:54.163340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.163348 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.163355 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.163362 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163369 | orchestrator | 2025-02-19 09:04:54.163376 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.163383 | orchestrator | Wednesday 19 February 2025 08:54:04 +0000 (0:00:00.785) 0:04:31.526 **** 2025-02-19 09:04:54.163390 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163397 | orchestrator | 
skipping: [testbed-node-1] 2025-02-19 09:04:54.163404 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.163411 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.163418 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.163425 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.163432 | orchestrator | 2025-02-19 09:04:54.163446 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.163453 | orchestrator | Wednesday 19 February 2025 08:54:05 +0000 (0:00:01.122) 0:04:32.648 **** 2025-02-19 09:04:54.163460 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.163467 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163474 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.163481 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.163488 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.163495 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.163502 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-19 09:04:54.163509 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-19 09:04:54.163516 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-19 09:04:54.163523 | orchestrator | 2025-02-19 09:04:54.163529 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.163535 | orchestrator | Wednesday 19 February 2025 08:54:07 +0000 (0:00:02.134) 0:04:34.783 **** 2025-02-19 09:04:54.163548 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163555 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.163561 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.163567 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.163573 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.163579 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.163585 | orchestrator | 2025-02-19 09:04:54.163592 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.163598 | orchestrator | Wednesday 19 February 2025 08:54:08 +0000 (0:00:00.974) 0:04:35.757 **** 2025-02-19 09:04:54.163604 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163613 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.163619 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.163625 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.163631 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.163637 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.163643 | orchestrator | 2025-02-19 09:04:54.163650 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.163656 | orchestrator | Wednesday 19 February 2025 08:54:10 +0000 (0:00:01.132) 0:04:36.890 **** 2025-02-19 09:04:54.163662 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.163669 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163675 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.163681 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.163687 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.163693 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.163700 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 
09:04:54.163706 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.163712 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.163718 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.163724 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.163730 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.163736 | orchestrator | 2025-02-19 09:04:54.163743 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.163749 | orchestrator | Wednesday 19 February 2025 08:54:11 +0000 (0:00:01.265) 0:04:38.155 **** 2025-02-19 09:04:54.163755 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163761 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.163767 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.163774 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.163781 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.163787 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.163793 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.163800 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.163806 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.163812 | orchestrator | 2025-02-19 09:04:54.163818 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.163824 | orchestrator | Wednesday 19 February 2025 08:54:12 +0000 (0:00:01.109) 0:04:39.264 **** 2025-02-19 09:04:54.163831 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.163837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.163843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.163849 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.163855 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-19 09:04:54.163900 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-19 09:04:54.163913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-19 09:04:54.163919 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.163926 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-19 09:04:54.163932 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-19 09:04:54.163938 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-19 09:04:54.163944 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.163950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.163957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.163963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.163969 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.163975 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-19 09:04:54.163981 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-19 
09:04:54.163987 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 09:04:54.163993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 09:04:54.163999 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 09:04:54.164005 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.164012 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 09:04:54.164018 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.164024 | orchestrator | 2025-02-19 09:04:54.164030 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-19 09:04:54.164036 | orchestrator | Wednesday 19 February 2025 08:54:15 +0000 (0:00:02.725) 0:04:41.990 **** 2025-02-19 09:04:54.164042 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.164048 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.164055 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.164061 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.164067 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.164073 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.164079 | orchestrator | 2025-02-19 09:04:54.164085 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-19 09:04:54.164091 | orchestrator | Wednesday 19 February 2025 08:54:20 +0000 (0:00:05.642) 0:04:47.633 **** 2025-02-19 09:04:54.164097 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.164104 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.164110 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.164116 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.164134 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.164141 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.164147 | orchestrator | 2025-02-19 09:04:54.164153 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-02-19 09:04:54.164159 | orchestrator | Wednesday 19 February 2025 08:54:21 +0000 (0:00:01.126) 0:04:48.759 **** 2025-02-19 09:04:54.164165 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164172 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.164178 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.164184 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.164191 | orchestrator | 2025-02-19 09:04:54.164197 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-02-19 09:04:54.164203 | orchestrator | Wednesday 19 February 2025 08:54:23 +0000 (0:00:01.467) 0:04:50.226 **** 2025-02-19 09:04:54.164209 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.164216 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.164222 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.164228 | orchestrator | 2025-02-19 09:04:54.164234 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-02-19 09:04:54.164240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.164251 | orchestrator | 2025-02-19 09:04:54.164257 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] 
*********************** 2025-02-19 09:04:54.164263 | orchestrator | Wednesday 19 February 2025 08:54:24 +0000 (0:00:01.335) 0:04:51.562 **** 2025-02-19 09:04:54.164269 | orchestrator | 2025-02-19 09:04:54.164276 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-02-19 09:04:54.164282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.164288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.164294 | orchestrator | 2025-02-19 09:04:54.164301 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-02-19 09:04:54.164307 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.164313 | orchestrator | 2025-02-19 09:04:54.164319 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-02-19 09:04:54.164325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.164332 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164343 | orchestrator | 2025-02-19 09:04:54.164349 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-02-19 09:04:54.164356 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.164362 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.164368 | orchestrator | 2025-02-19 09:04:54.164374 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-02-19 09:04:54.164380 | orchestrator | Wednesday 19 February 2025 08:54:26 +0000 (0:00:01.740) 0:04:53.303 **** 2025-02-19 09:04:54.164387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.164393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.164399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.164405 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.164411 | orchestrator | 2025-02-19 09:04:54.164417 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-02-19 09:04:54.164424 | orchestrator | Wednesday 19 February 2025 08:54:27 +0000 (0:00:01.035) 0:04:54.338 **** 2025-02-19 09:04:54.164430 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.164480 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.164490 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.164497 | orchestrator | 2025-02-19 09:04:54.164503 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-02-19 09:04:54.164509 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164516 | orchestrator | 2025-02-19 09:04:54.164522 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-02-19 09:04:54.164528 | orchestrator | Wednesday 19 February 2025 08:54:28 +0000 (0:00:01.263) 0:04:55.601 **** 2025-02-19 09:04:54.164535 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.164544 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.164551 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.164557 | orchestrator | 2025-02-19 09:04:54.164563 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-02-19 09:04:54.164569 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164575 | orchestrator | skipping: 
[testbed-node-4] 2025-02-19 09:04:54.164582 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.164588 | orchestrator | 2025-02-19 09:04:54.164594 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-19 09:04:54.164600 | orchestrator | Wednesday 19 February 2025 08:54:29 +0000 (0:00:00.793) 0:04:56.394 **** 2025-02-19 09:04:54.164606 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.164612 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.164619 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.164625 | orchestrator | 2025-02-19 09:04:54.164631 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-02-19 09:04:54.164645 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164651 | orchestrator | 2025-02-19 09:04:54.164657 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-19 09:04:54.164664 | orchestrator | Wednesday 19 February 2025 08:54:30 +0000 (0:00:00.611) 0:04:57.005 **** 2025-02-19 09:04:54.164670 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.164676 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.164682 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.164688 | orchestrator | 2025-02-19 09:04:54.164694 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-02-19 09:04:54.164700 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164707 | orchestrator | 2025-02-19 09:04:54.164713 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-02-19 09:04:54.164719 | orchestrator | Wednesday 19 February 2025 08:54:31 +0000 (0:00:01.247) 0:04:58.253 **** 2025-02-19 09:04:54.164725 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164731 | orchestrator | 2025-02-19 09:04:54.164738 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-02-19 09:04:54.164744 | orchestrator | Wednesday 19 February 2025 08:54:31 +0000 (0:00:00.163) 0:04:58.416 **** 2025-02-19 09:04:54.164750 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.164756 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.164763 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.164769 | orchestrator | 2025-02-19 09:04:54.164775 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-02-19 09:04:54.164781 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164788 | orchestrator | 2025-02-19 09:04:54.164794 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-19 09:04:54.164800 | orchestrator | Wednesday 19 February 2025 08:54:32 +0000 (0:00:00.721) 0:04:59.138 **** 2025-02-19 09:04:54.164806 | orchestrator | 2025-02-19 09:04:54.164812 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-02-19 09:04:54.164819 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.164831 | orchestrator | 2025-02-19 09:04:54.164838 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-02-19 09:04:54.164844 | orchestrator 
| Wednesday 19 February 2025 08:54:33 +0000 (0:00:01.386) 0:05:00.524 **** 2025-02-19 09:04:54.164850 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.164856 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.164863 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.164869 | orchestrator | 2025-02-19 09:04:54.164875 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-02-19 09:04:54.164881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.164888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.164894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.164900 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164907 | orchestrator | 2025-02-19 09:04:54.164913 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-19 09:04:54.164919 | orchestrator | Wednesday 19 February 2025 08:54:34 +0000 (0:00:01.278) 0:05:01.803 **** 2025-02-19 09:04:54.164925 | orchestrator | 2025-02-19 09:04:54.164932 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-02-19 09:04:54.164938 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.164944 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.164950 | orchestrator | 2025-02-19 09:04:54.164956 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-19 09:04:54.164963 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.164969 | orchestrator | 2025-02-19 09:04:54.164975 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-02-19 09:04:54.164985 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.164991 | orchestrator | 2025-02-19 09:04:54.164998 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-19 09:04:54.165004 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.165010 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.165016 | orchestrator | 2025-02-19 09:04:54.165023 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-02-19 09:04:54.165029 | orchestrator | Wednesday 19 February 2025 08:54:36 +0000 (0:00:01.790) 0:05:03.594 **** 2025-02-19 09:04:54.165035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.165041 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.165047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.165091 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.165100 | orchestrator | 2025-02-19 09:04:54.165107 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-02-19 09:04:54.165113 | orchestrator | Wednesday 19 February 2025 08:54:37 +0000 (0:00:00.869) 0:05:04.464 **** 2025-02-19 09:04:54.165119 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.165136 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.165143 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.165149 | orchestrator | 2025-02-19 09:04:54.165155 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-02-19 09:04:54.165162 | orchestrator | skipping: [testbed-node-3] 
2025-02-19 09:04:54.165168 | orchestrator | 2025-02-19 09:04:54.165177 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-19 09:04:54.165183 | orchestrator | Wednesday 19 February 2025 08:54:39 +0000 (0:00:01.381) 0:05:05.845 **** 2025-02-19 09:04:54.165189 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.165196 | orchestrator | 2025-02-19 09:04:54.165202 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-02-19 09:04:54.165208 | orchestrator | Wednesday 19 February 2025 08:54:39 +0000 (0:00:00.878) 0:05:06.723 **** 2025-02-19 09:04:54.165214 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.165220 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.165226 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.165234 | orchestrator | 2025-02-19 09:04:54.165244 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-02-19 09:04:54.165254 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.165265 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.165275 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.165285 | orchestrator | 2025-02-19 09:04:54.165294 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-02-19 09:04:54.165304 | orchestrator | Wednesday 19 February 2025 08:54:42 +0000 (0:00:02.612) 0:05:09.336 **** 2025-02-19 09:04:54.165313 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.165323 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.165333 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.165343 | orchestrator | 2025-02-19 09:04:54.165350 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-19 09:04:54.165356 | orchestrator | Wednesday 19 February 2025 08:54:44 +0000 (0:00:02.204) 0:05:11.541 **** 2025-02-19 09:04:54.165362 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.165368 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.165375 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.165381 | orchestrator | 2025-02-19 09:04:54.165387 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-02-19 09:04:54.165393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.165400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.165406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.165412 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.165423 | orchestrator | 2025-02-19 09:04:54.165429 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-02-19 09:04:54.165436 | orchestrator | Wednesday 19 February 2025 08:54:47 +0000 (0:00:02.374) 0:05:13.916 **** 2025-02-19 09:04:54.165442 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.165448 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.165454 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.165460 | orchestrator | 2025-02-19 09:04:54.165467 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-19 09:04:54.165473 | orchestrator | Wednesday 19 February 2025 
08:54:48 +0000 (0:00:00.972) 0:05:14.888 **** 2025-02-19 09:04:54.165480 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.165486 | orchestrator | 2025-02-19 09:04:54.165492 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-02-19 09:04:54.165498 | orchestrator | Wednesday 19 February 2025 08:54:48 +0000 (0:00:00.660) 0:05:15.549 **** 2025-02-19 09:04:54.165504 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.165511 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.165521 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.165528 | orchestrator | 2025-02-19 09:04:54.165534 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-02-19 09:04:54.165541 | orchestrator | Wednesday 19 February 2025 08:54:49 +0000 (0:00:00.299) 0:05:15.849 **** 2025-02-19 09:04:54.165547 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.165553 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.165560 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.165566 | orchestrator | 2025-02-19 09:04:54.165572 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-02-19 09:04:54.165578 | orchestrator | Wednesday 19 February 2025 08:54:50 +0000 (0:00:00.973) 0:05:16.823 **** 2025-02-19 09:04:54.165584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.165591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.165597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.165603 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.165609 | orchestrator | 2025-02-19 09:04:54.165615 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-02-19 09:04:54.165622 | orchestrator | Wednesday 19 February 2025 08:54:50 +0000 (0:00:00.606) 0:05:17.429 **** 2025-02-19 09:04:54.165628 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.165634 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.165640 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.165647 | orchestrator | 2025-02-19 09:04:54.165653 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-02-19 09:04:54.165659 | orchestrator | Wednesday 19 February 2025 08:54:50 +0000 (0:00:00.296) 0:05:17.725 **** 2025-02-19 09:04:54.165665 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.165671 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.165678 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.165684 | orchestrator | 2025-02-19 09:04:54.165690 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-19 09:04:54.165739 | orchestrator | Wednesday 19 February 2025 08:54:51 +0000 (0:00:00.313) 0:05:18.039 **** 2025-02-19 09:04:54.165749 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.165755 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.165761 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.165768 | orchestrator | 2025-02-19 09:04:54.165774 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-02-19 09:04:54.165780 | orchestrator | Wednesday 19 February 2025 08:54:51 
+0000 (0:00:00.303) 0:05:18.342 **** 2025-02-19 09:04:54.165786 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.165792 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.165799 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.165808 | orchestrator | 2025-02-19 09:04:54.165814 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-19 09:04:54.165821 | orchestrator | Wednesday 19 February 2025 08:54:52 +0000 (0:00:00.528) 0:05:18.871 **** 2025-02-19 09:04:54.165827 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.165833 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.165839 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.165845 | orchestrator | 2025-02-19 09:04:54.165851 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-02-19 09:04:54.165858 | orchestrator | 2025-02-19 09:04:54.165864 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-19 09:04:54.165873 | orchestrator | Wednesday 19 February 2025 08:54:54 +0000 (0:00:02.073) 0:05:20.945 **** 2025-02-19 09:04:54.165880 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.165886 | orchestrator | 2025-02-19 09:04:54.165892 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-19 09:04:54.165903 | orchestrator | Wednesday 19 February 2025 08:54:54 +0000 (0:00:00.712) 0:05:21.658 **** 2025-02-19 09:04:54.165914 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.165925 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.165935 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.165946 | orchestrator | 2025-02-19 09:04:54.165957 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-19 09:04:54.165969 | orchestrator | Wednesday 19 February 2025 08:54:55 +0000 (0:00:00.746) 0:05:22.404 **** 2025-02-19 09:04:54.165976 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.165983 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.165989 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.165995 | orchestrator | 2025-02-19 09:04:54.166001 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-19 09:04:54.166008 | orchestrator | Wednesday 19 February 2025 08:54:56 +0000 (0:00:00.438) 0:05:22.842 **** 2025-02-19 09:04:54.166031 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166039 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166045 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166052 | orchestrator | 2025-02-19 09:04:54.166058 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-19 09:04:54.166064 | orchestrator | Wednesday 19 February 2025 08:54:56 +0000 (0:00:00.712) 0:05:23.554 **** 2025-02-19 09:04:54.166070 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166076 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166083 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166089 | orchestrator | 2025-02-19 09:04:54.166095 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-19 09:04:54.166101 | 
orchestrator | Wednesday 19 February 2025 08:54:57 +0000 (0:00:00.411) 0:05:23.966 **** 2025-02-19 09:04:54.166107 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.166114 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.166120 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.166143 | orchestrator | 2025-02-19 09:04:54.166153 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-19 09:04:54.166162 | orchestrator | Wednesday 19 February 2025 08:54:58 +0000 (0:00:00.889) 0:05:24.856 **** 2025-02-19 09:04:54.166168 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166175 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166181 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166187 | orchestrator | 2025-02-19 09:04:54.166193 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-19 09:04:54.166199 | orchestrator | Wednesday 19 February 2025 08:54:58 +0000 (0:00:00.418) 0:05:25.275 **** 2025-02-19 09:04:54.166206 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166212 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166223 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166229 | orchestrator | 2025-02-19 09:04:54.166235 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-19 09:04:54.166241 | orchestrator | Wednesday 19 February 2025 08:54:59 +0000 (0:00:00.638) 0:05:25.913 **** 2025-02-19 09:04:54.166247 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166254 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166260 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166266 | orchestrator | 2025-02-19 09:04:54.166272 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-19 09:04:54.166278 | orchestrator | Wednesday 19 February 2025 08:54:59 +0000 (0:00:00.326) 0:05:26.239 **** 2025-02-19 09:04:54.166285 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166291 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166297 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166303 | orchestrator | 2025-02-19 09:04:54.166309 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-19 09:04:54.166315 | orchestrator | Wednesday 19 February 2025 08:54:59 +0000 (0:00:00.294) 0:05:26.534 **** 2025-02-19 09:04:54.166322 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166328 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166334 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166343 | orchestrator | 2025-02-19 09:04:54.166349 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-19 09:04:54.166356 | orchestrator | Wednesday 19 February 2025 08:55:00 +0000 (0:00:00.494) 0:05:27.028 **** 2025-02-19 09:04:54.166362 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.166368 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.166418 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.166427 | orchestrator | 2025-02-19 09:04:54.166433 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-19 09:04:54.166440 | orchestrator | Wednesday 19 February 2025 08:55:01 +0000 (0:00:01.404) 0:05:28.433 **** 
2025-02-19 09:04:54.166446 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166452 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166458 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166464 | orchestrator | 2025-02-19 09:04:54.166471 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-19 09:04:54.166477 | orchestrator | Wednesday 19 February 2025 08:55:02 +0000 (0:00:00.703) 0:05:29.136 **** 2025-02-19 09:04:54.166483 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.166489 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.166495 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.166501 | orchestrator | 2025-02-19 09:04:54.166507 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-19 09:04:54.166514 | orchestrator | Wednesday 19 February 2025 08:55:03 +0000 (0:00:00.924) 0:05:30.060 **** 2025-02-19 09:04:54.166520 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166526 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166532 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166538 | orchestrator | 2025-02-19 09:04:54.166544 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-19 09:04:54.166551 | orchestrator | Wednesday 19 February 2025 08:55:04 +0000 (0:00:00.922) 0:05:30.982 **** 2025-02-19 09:04:54.166557 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166563 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166569 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166575 | orchestrator | 2025-02-19 09:04:54.166581 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-19 09:04:54.166590 | orchestrator | Wednesday 19 February 2025 08:55:05 +0000 (0:00:00.873) 0:05:31.855 **** 2025-02-19 09:04:54.166597 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166603 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166609 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166620 | orchestrator | 2025-02-19 09:04:54.166626 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-19 09:04:54.166633 | orchestrator | Wednesday 19 February 2025 08:55:05 +0000 (0:00:00.639) 0:05:32.495 **** 2025-02-19 09:04:54.166639 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166645 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166651 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166657 | orchestrator | 2025-02-19 09:04:54.166664 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-19 09:04:54.166670 | orchestrator | Wednesday 19 February 2025 08:55:06 +0000 (0:00:00.540) 0:05:33.035 **** 2025-02-19 09:04:54.166676 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166683 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166689 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166695 | orchestrator | 2025-02-19 09:04:54.166701 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-19 09:04:54.166708 | orchestrator | Wednesday 19 February 2025 08:55:06 +0000 (0:00:00.466) 0:05:33.502 **** 2025-02-19 09:04:54.166714 | orchestrator | ok: [testbed-node-0] 2025-02-19 
09:04:54.166720 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.166726 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.166733 | orchestrator | 2025-02-19 09:04:54.166739 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-19 09:04:54.166745 | orchestrator | Wednesday 19 February 2025 08:55:07 +0000 (0:00:00.874) 0:05:34.376 **** 2025-02-19 09:04:54.166751 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.166758 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.166764 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.166770 | orchestrator | 2025-02-19 09:04:54.166777 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-19 09:04:54.166783 | orchestrator | Wednesday 19 February 2025 08:55:08 +0000 (0:00:00.823) 0:05:35.200 **** 2025-02-19 09:04:54.166789 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166795 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166802 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166808 | orchestrator | 2025-02-19 09:04:54.166814 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-19 09:04:54.166820 | orchestrator | Wednesday 19 February 2025 08:55:08 +0000 (0:00:00.604) 0:05:35.804 **** 2025-02-19 09:04:54.166827 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166833 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166839 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166845 | orchestrator | 2025-02-19 09:04:54.166852 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-19 09:04:54.166858 | orchestrator | Wednesday 19 February 2025 08:55:09 +0000 (0:00:00.595) 0:05:36.399 **** 2025-02-19 09:04:54.166864 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166870 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166877 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166883 | orchestrator | 2025-02-19 09:04:54.166889 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-19 09:04:54.166896 | orchestrator | Wednesday 19 February 2025 08:55:10 +0000 (0:00:00.900) 0:05:37.300 **** 2025-02-19 09:04:54.166902 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166908 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166914 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166920 | orchestrator | 2025-02-19 09:04:54.166927 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-19 09:04:54.166933 | orchestrator | Wednesday 19 February 2025 08:55:11 +0000 (0:00:00.561) 0:05:37.862 **** 2025-02-19 09:04:54.166939 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166946 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.166952 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.166958 | orchestrator | 2025-02-19 09:04:54.166964 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-19 09:04:54.166974 | orchestrator | Wednesday 19 February 2025 08:55:11 +0000 (0:00:00.631) 0:05:38.493 **** 2025-02-19 09:04:54.166980 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.166987 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167048 | 
orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167061 | orchestrator | 2025-02-19 09:04:54.167072 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-19 09:04:54.167090 | orchestrator | Wednesday 19 February 2025 08:55:12 +0000 (0:00:00.553) 0:05:39.047 **** 2025-02-19 09:04:54.167099 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167108 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167117 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167177 | orchestrator | 2025-02-19 09:04:54.167189 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-19 09:04:54.167198 | orchestrator | Wednesday 19 February 2025 08:55:12 +0000 (0:00:00.703) 0:05:39.750 **** 2025-02-19 09:04:54.167208 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167218 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167228 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167237 | orchestrator | 2025-02-19 09:04:54.167247 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-19 09:04:54.167258 | orchestrator | Wednesday 19 February 2025 08:55:13 +0000 (0:00:00.564) 0:05:40.315 **** 2025-02-19 09:04:54.167268 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167278 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167287 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167297 | orchestrator | 2025-02-19 09:04:54.167307 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-19 09:04:54.167318 | orchestrator | Wednesday 19 February 2025 08:55:13 +0000 (0:00:00.365) 0:05:40.681 **** 2025-02-19 09:04:54.167328 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167338 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167348 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167357 | orchestrator | 2025-02-19 09:04:54.167367 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-19 09:04:54.167376 | orchestrator | Wednesday 19 February 2025 08:55:14 +0000 (0:00:00.339) 0:05:41.020 **** 2025-02-19 09:04:54.167386 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167396 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167407 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167417 | orchestrator | 2025-02-19 09:04:54.167427 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-19 09:04:54.167436 | orchestrator | Wednesday 19 February 2025 08:55:14 +0000 (0:00:00.536) 0:05:41.557 **** 2025-02-19 09:04:54.167446 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167455 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167465 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167474 | orchestrator | 2025-02-19 09:04:54.167484 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-19 09:04:54.167494 | orchestrator | Wednesday 19 February 2025 08:55:15 +0000 (0:00:00.368) 0:05:41.926 **** 2025-02-19 09:04:54.167504 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.167513 | orchestrator | 
skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.167523 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167532 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.167542 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.167552 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167561 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.167575 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.167585 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167607 | orchestrator | 2025-02-19 09:04:54.167619 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-19 09:04:54.167629 | orchestrator | Wednesday 19 February 2025 08:55:15 +0000 (0:00:00.530) 0:05:42.456 **** 2025-02-19 09:04:54.167638 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-19 09:04:54.167647 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-19 09:04:54.167656 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167665 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-19 09:04:54.167675 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-19 09:04:54.167686 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167698 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-19 09:04:54.167710 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-19 09:04:54.167721 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167733 | orchestrator | 2025-02-19 09:04:54.167743 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-19 09:04:54.167754 | orchestrator | Wednesday 19 February 2025 08:55:16 +0000 (0:00:00.475) 0:05:42.931 **** 2025-02-19 09:04:54.167766 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167777 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167790 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167803 | orchestrator | 2025-02-19 09:04:54.167814 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-19 09:04:54.167826 | orchestrator | Wednesday 19 February 2025 08:55:16 +0000 (0:00:00.834) 0:05:43.766 **** 2025-02-19 09:04:54.167838 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167850 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167863 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167876 | orchestrator | 2025-02-19 09:04:54.167887 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.167899 | orchestrator | Wednesday 19 February 2025 08:55:17 +0000 (0:00:00.547) 0:05:44.314 **** 2025-02-19 09:04:54.167912 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.167924 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.167938 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.167949 | orchestrator | 2025-02-19 09:04:54.167961 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.167974 | orchestrator | Wednesday 19 February 2025 08:55:17 +0000 (0:00:00.466) 0:05:44.780 **** 2025-02-19 09:04:54.168072 | 
orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168084 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168094 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168104 | orchestrator | 2025-02-19 09:04:54.168114 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:04:54.168138 | orchestrator | Wednesday 19 February 2025 08:55:18 +0000 (0:00:00.974) 0:05:45.755 **** 2025-02-19 09:04:54.168149 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168159 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168169 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168178 | orchestrator | 2025-02-19 09:04:54.168188 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.168196 | orchestrator | Wednesday 19 February 2025 08:55:20 +0000 (0:00:01.063) 0:05:46.819 **** 2025-02-19 09:04:54.168204 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168213 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168221 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168229 | orchestrator | 2025-02-19 09:04:54.168237 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.168246 | orchestrator | Wednesday 19 February 2025 08:55:20 +0000 (0:00:00.600) 0:05:47.419 **** 2025-02-19 09:04:54.168263 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.168284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.168292 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.168301 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168310 | orchestrator | 2025-02-19 09:04:54.168318 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.168327 | orchestrator | Wednesday 19 February 2025 08:55:21 +0000 (0:00:00.515) 0:05:47.934 **** 2025-02-19 09:04:54.168335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.168344 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.168353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.168362 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168371 | orchestrator | 2025-02-19 09:04:54.168379 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.168388 | orchestrator | Wednesday 19 February 2025 08:55:21 +0000 (0:00:00.516) 0:05:48.451 **** 2025-02-19 09:04:54.168397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.168406 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.168415 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.168424 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168432 | orchestrator | 2025-02-19 09:04:54.168441 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.168450 | orchestrator | Wednesday 19 February 2025 08:55:22 +0000 (0:00:00.512) 0:05:48.964 **** 2025-02-19 09:04:54.168459 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168468 | orchestrator | skipping: [testbed-node-1] 
2025-02-19 09:04:54.168477 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168485 | orchestrator | 2025-02-19 09:04:54.168494 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.168503 | orchestrator | Wednesday 19 February 2025 08:55:22 +0000 (0:00:00.414) 0:05:49.378 **** 2025-02-19 09:04:54.168512 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.168521 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168530 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.168539 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168549 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.168557 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168566 | orchestrator | 2025-02-19 09:04:54.168574 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.168584 | orchestrator | Wednesday 19 February 2025 08:55:23 +0000 (0:00:01.188) 0:05:50.566 **** 2025-02-19 09:04:54.168593 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168603 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168612 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168628 | orchestrator | 2025-02-19 09:04:54.168638 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.168648 | orchestrator | Wednesday 19 February 2025 08:55:24 +0000 (0:00:00.464) 0:05:51.030 **** 2025-02-19 09:04:54.168658 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168668 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168677 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168687 | orchestrator | 2025-02-19 09:04:54.168699 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.168712 | orchestrator | Wednesday 19 February 2025 08:55:24 +0000 (0:00:00.462) 0:05:51.493 **** 2025-02-19 09:04:54.168725 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.168738 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168752 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.168764 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168788 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.168801 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168812 | orchestrator | 2025-02-19 09:04:54.168822 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.168831 | orchestrator | Wednesday 19 February 2025 08:55:25 +0000 (0:00:00.646) 0:05:52.139 **** 2025-02-19 09:04:54.168840 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168849 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168859 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.168868 | orchestrator | 2025-02-19 09:04:54.168878 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.168891 | orchestrator | Wednesday 19 February 2025 08:55:26 +0000 (0:00:00.875) 0:05:53.014 **** 2025-02-19 09:04:54.168901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.168946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  
2025-02-19 09:04:54.168954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.168961 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.168968 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-19 09:04:54.168975 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-19 09:04:54.168981 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-19 09:04:54.168988 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.168998 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-19 09:04:54.169005 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-19 09:04:54.169011 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-19 09:04:54.169018 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169025 | orchestrator | 2025-02-19 09:04:54.169032 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-19 09:04:54.169038 | orchestrator | Wednesday 19 February 2025 08:55:27 +0000 (0:00:00.988) 0:05:54.003 **** 2025-02-19 09:04:54.169045 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169051 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169057 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169063 | orchestrator | 2025-02-19 09:04:54.169069 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-19 09:04:54.169075 | orchestrator | Wednesday 19 February 2025 08:55:28 +0000 (0:00:01.100) 0:05:55.103 **** 2025-02-19 09:04:54.169080 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169086 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169092 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169098 | orchestrator | 2025-02-19 09:04:54.169104 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-19 09:04:54.169109 | orchestrator | Wednesday 19 February 2025 08:55:29 +0000 (0:00:00.896) 0:05:56.000 **** 2025-02-19 09:04:54.169115 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169121 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169148 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169154 | orchestrator | 2025-02-19 09:04:54.169160 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-19 09:04:54.169166 | orchestrator | Wednesday 19 February 2025 08:55:30 +0000 (0:00:01.063) 0:05:57.064 **** 2025-02-19 09:04:54.169172 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169178 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169184 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169190 | orchestrator | 2025-02-19 09:04:54.169196 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-02-19 09:04:54.169201 | orchestrator | Wednesday 19 February 2025 08:55:31 +0000 (0:00:00.817) 0:05:57.881 **** 2025-02-19 09:04:54.169207 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.169213 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.169219 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.169230 | orchestrator | 2025-02-19 09:04:54.169236 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-02-19 
09:04:54.169242 | orchestrator | Wednesday 19 February 2025 08:55:31 +0000 (0:00:00.531) 0:05:58.413 **** 2025-02-19 09:04:54.169249 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.169255 | orchestrator | 2025-02-19 09:04:54.169261 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-02-19 09:04:54.169267 | orchestrator | Wednesday 19 February 2025 08:55:32 +0000 (0:00:01.355) 0:05:59.769 **** 2025-02-19 09:04:54.169273 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169278 | orchestrator | 2025-02-19 09:04:54.169284 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-02-19 09:04:54.169290 | orchestrator | Wednesday 19 February 2025 08:55:33 +0000 (0:00:00.272) 0:06:00.041 **** 2025-02-19 09:04:54.169296 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-19 09:04:54.169302 | orchestrator | 2025-02-19 09:04:54.169308 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-02-19 09:04:54.169314 | orchestrator | Wednesday 19 February 2025 08:55:34 +0000 (0:00:01.493) 0:06:01.534 **** 2025-02-19 09:04:54.169320 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.169326 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.169332 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.169338 | orchestrator | 2025-02-19 09:04:54.169343 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-02-19 09:04:54.169349 | orchestrator | Wednesday 19 February 2025 08:55:35 +0000 (0:00:00.570) 0:06:02.105 **** 2025-02-19 09:04:54.169355 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.169361 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.169367 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.169373 | orchestrator | 2025-02-19 09:04:54.169379 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-02-19 09:04:54.169385 | orchestrator | Wednesday 19 February 2025 08:55:36 +0000 (0:00:01.075) 0:06:03.181 **** 2025-02-19 09:04:54.169391 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.169397 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.169403 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.169409 | orchestrator | 2025-02-19 09:04:54.169415 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-02-19 09:04:54.169421 | orchestrator | Wednesday 19 February 2025 08:55:38 +0000 (0:00:01.860) 0:06:05.041 **** 2025-02-19 09:04:54.169426 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.169432 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.169438 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.169444 | orchestrator | 2025-02-19 09:04:54.169450 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-02-19 09:04:54.169456 | orchestrator | Wednesday 19 February 2025 08:55:39 +0000 (0:00:01.358) 0:06:06.399 **** 2025-02-19 09:04:54.169462 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.169468 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.169474 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.169479 | orchestrator | 2025-02-19 09:04:54.169485 | orchestrator | TASK [ceph-mon : recursively 
fix ownership of monitor directory] *************** 2025-02-19 09:04:54.169507 | orchestrator | Wednesday 19 February 2025 08:55:40 +0000 (0:00:00.827) 0:06:07.226 **** 2025-02-19 09:04:54.169514 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.169519 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.169525 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.169533 | orchestrator | 2025-02-19 09:04:54.169543 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-02-19 09:04:54.169552 | orchestrator | Wednesday 19 February 2025 08:55:41 +0000 (0:00:01.061) 0:06:08.288 **** 2025-02-19 09:04:54.169561 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169570 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169586 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169595 | orchestrator | 2025-02-19 09:04:54.169604 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-02-19 09:04:54.169617 | orchestrator | Wednesday 19 February 2025 08:55:41 +0000 (0:00:00.362) 0:06:08.651 **** 2025-02-19 09:04:54.169626 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.169636 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.169646 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.169660 | orchestrator | 2025-02-19 09:04:54.169666 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-02-19 09:04:54.169672 | orchestrator | Wednesday 19 February 2025 08:55:42 +0000 (0:00:00.353) 0:06:09.005 **** 2025-02-19 09:04:54.169679 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169685 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169691 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169697 | orchestrator | 2025-02-19 09:04:54.169702 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-02-19 09:04:54.169708 | orchestrator | Wednesday 19 February 2025 08:55:42 +0000 (0:00:00.585) 0:06:09.590 **** 2025-02-19 09:04:54.169714 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.169720 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.169726 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.169732 | orchestrator | 2025-02-19 09:04:54.169737 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-02-19 09:04:54.169743 | orchestrator | Wednesday 19 February 2025 08:55:43 +0000 (0:00:00.522) 0:06:10.112 **** 2025-02-19 09:04:54.169749 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.169755 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.169761 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.169767 | orchestrator | 2025-02-19 09:04:54.169773 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-02-19 09:04:54.169779 | orchestrator | Wednesday 19 February 2025 08:55:44 +0000 (0:00:01.408) 0:06:11.521 **** 2025-02-19 09:04:54.169784 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169790 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169796 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169802 | orchestrator | 2025-02-19 09:04:54.169808 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-02-19 09:04:54.169814 | orchestrator | Wednesday 19 February 
2025 08:55:45 +0000 (0:00:00.299) 0:06:11.820 **** 2025-02-19 09:04:54.169820 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.169826 | orchestrator | 2025-02-19 09:04:54.169831 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-02-19 09:04:54.169837 | orchestrator | Wednesday 19 February 2025 08:55:46 +0000 (0:00:01.053) 0:06:12.874 **** 2025-02-19 09:04:54.169843 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169850 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169860 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169869 | orchestrator | 2025-02-19 09:04:54.169879 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-02-19 09:04:54.169889 | orchestrator | Wednesday 19 February 2025 08:55:46 +0000 (0:00:00.656) 0:06:13.530 **** 2025-02-19 09:04:54.169898 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.169907 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.169916 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.169922 | orchestrator | 2025-02-19 09:04:54.169928 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-02-19 09:04:54.169933 | orchestrator | Wednesday 19 February 2025 08:55:47 +0000 (0:00:00.561) 0:06:14.092 **** 2025-02-19 09:04:54.169940 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.169946 | orchestrator | 2025-02-19 09:04:54.169951 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-02-19 09:04:54.169962 | orchestrator | Wednesday 19 February 2025 08:55:48 +0000 (0:00:01.027) 0:06:15.120 **** 2025-02-19 09:04:54.169968 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.169988 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.169995 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.170008 | orchestrator | 2025-02-19 09:04:54.170035 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-02-19 09:04:54.170043 | orchestrator | Wednesday 19 February 2025 08:55:49 +0000 (0:00:01.607) 0:06:16.727 **** 2025-02-19 09:04:54.170048 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.170054 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.170060 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.170066 | orchestrator | 2025-02-19 09:04:54.170072 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-02-19 09:04:54.170078 | orchestrator | Wednesday 19 February 2025 08:55:51 +0000 (0:00:01.668) 0:06:18.395 **** 2025-02-19 09:04:54.170084 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.170089 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.170095 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.170101 | orchestrator | 2025-02-19 09:04:54.170107 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-02-19 09:04:54.170113 | orchestrator | Wednesday 19 February 2025 08:55:53 +0000 (0:00:02.104) 0:06:20.500 **** 2025-02-19 09:04:54.170119 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.170163 | orchestrator | changed: [testbed-node-2] 
2025-02-19 09:04:54.170170 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.170176 | orchestrator | 2025-02-19 09:04:54.170182 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-02-19 09:04:54.170210 | orchestrator | Wednesday 19 February 2025 08:55:56 +0000 (0:00:03.006) 0:06:23.507 **** 2025-02-19 09:04:54.170217 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.170222 | orchestrator | 2025-02-19 09:04:54.170228 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-02-19 09:04:54.170234 | orchestrator | Wednesday 19 February 2025 08:55:57 +0000 (0:00:01.193) 0:06:24.700 **** 2025-02-19 09:04:54.170240 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-02-19 09:04:54.170246 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.170252 | orchestrator | 2025-02-19 09:04:54.170258 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-02-19 09:04:54.170264 | orchestrator | Wednesday 19 February 2025 08:56:19 +0000 (0:00:21.632) 0:06:46.332 **** 2025-02-19 09:04:54.170270 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.170276 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.170281 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.170287 | orchestrator | 2025-02-19 09:04:54.170293 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-02-19 09:04:54.170303 | orchestrator | Wednesday 19 February 2025 08:56:29 +0000 (0:00:10.381) 0:06:56.714 **** 2025-02-19 09:04:54.170309 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170315 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.170321 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.170326 | orchestrator | 2025-02-19 09:04:54.170332 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-19 09:04:54.170338 | orchestrator | Wednesday 19 February 2025 08:56:31 +0000 (0:00:01.195) 0:06:57.909 **** 2025-02-19 09:04:54.170344 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.170350 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.170356 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.170362 | orchestrator | 2025-02-19 09:04:54.170368 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-02-19 09:04:54.170374 | orchestrator | Wednesday 19 February 2025 08:56:31 +0000 (0:00:00.602) 0:06:58.512 **** 2025-02-19 09:04:54.170385 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.170391 | orchestrator | 2025-02-19 09:04:54.170397 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-02-19 09:04:54.170403 | orchestrator | Wednesday 19 February 2025 08:56:32 +0000 (0:00:00.685) 0:06:59.198 **** 2025-02-19 09:04:54.170409 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.170414 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.170420 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.170426 | orchestrator | 2025-02-19 09:04:54.170432 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] 
*********************** 2025-02-19 09:04:54.170438 | orchestrator | Wednesday 19 February 2025 08:56:32 +0000 (0:00:00.291) 0:06:59.490 **** 2025-02-19 09:04:54.170444 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.170450 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.170456 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.170462 | orchestrator | 2025-02-19 09:04:54.170468 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-02-19 09:04:54.170474 | orchestrator | Wednesday 19 February 2025 08:56:33 +0000 (0:00:01.267) 0:07:00.757 **** 2025-02-19 09:04:54.170480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.170486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.170492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.170497 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170504 | orchestrator | 2025-02-19 09:04:54.170510 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-02-19 09:04:54.170516 | orchestrator | Wednesday 19 February 2025 08:56:34 +0000 (0:00:00.672) 0:07:01.430 **** 2025-02-19 09:04:54.170521 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.170527 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.170533 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.170544 | orchestrator | 2025-02-19 09:04:54.170550 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-19 09:04:54.170555 | orchestrator | Wednesday 19 February 2025 08:56:34 +0000 (0:00:00.315) 0:07:01.745 **** 2025-02-19 09:04:54.170560 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.170566 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.170571 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.170576 | orchestrator | 2025-02-19 09:04:54.170582 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-02-19 09:04:54.170587 | orchestrator | 2025-02-19 09:04:54.170592 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-19 09:04:54.170598 | orchestrator | Wednesday 19 February 2025 08:56:37 +0000 (0:00:02.158) 0:07:03.904 **** 2025-02-19 09:04:54.170603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.170609 | orchestrator | 2025-02-19 09:04:54.170614 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-19 09:04:54.170620 | orchestrator | Wednesday 19 February 2025 08:56:37 +0000 (0:00:00.803) 0:07:04.708 **** 2025-02-19 09:04:54.170625 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.170630 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.170636 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.170641 | orchestrator | 2025-02-19 09:04:54.170646 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-19 09:04:54.170652 | orchestrator | Wednesday 19 February 2025 08:56:38 +0000 (0:00:00.721) 0:07:05.430 **** 2025-02-19 09:04:54.170657 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170662 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.170668 | orchestrator | 
skipping: [testbed-node-2] 2025-02-19 09:04:54.170673 | orchestrator | 2025-02-19 09:04:54.170678 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-19 09:04:54.170699 | orchestrator | Wednesday 19 February 2025 08:56:38 +0000 (0:00:00.288) 0:07:05.718 **** 2025-02-19 09:04:54.170705 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170711 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.170716 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.170721 | orchestrator | 2025-02-19 09:04:54.170727 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-19 09:04:54.170732 | orchestrator | Wednesday 19 February 2025 08:56:39 +0000 (0:00:00.523) 0:07:06.241 **** 2025-02-19 09:04:54.170737 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170788 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.170794 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.170800 | orchestrator | 2025-02-19 09:04:54.170805 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-19 09:04:54.170810 | orchestrator | Wednesday 19 February 2025 08:56:39 +0000 (0:00:00.337) 0:07:06.578 **** 2025-02-19 09:04:54.170816 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.170821 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.170826 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.170831 | orchestrator | 2025-02-19 09:04:54.170851 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-19 09:04:54.170857 | orchestrator | Wednesday 19 February 2025 08:56:40 +0000 (0:00:00.636) 0:07:07.215 **** 2025-02-19 09:04:54.170862 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170867 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.170872 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.170878 | orchestrator | 2025-02-19 09:04:54.170883 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-19 09:04:54.170892 | orchestrator | Wednesday 19 February 2025 08:56:40 +0000 (0:00:00.304) 0:07:07.519 **** 2025-02-19 09:04:54.170897 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170902 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.170908 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.170913 | orchestrator | 2025-02-19 09:04:54.170918 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-19 09:04:54.170924 | orchestrator | Wednesday 19 February 2025 08:56:41 +0000 (0:00:00.584) 0:07:08.103 **** 2025-02-19 09:04:54.170929 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170934 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.170940 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.170945 | orchestrator | 2025-02-19 09:04:54.170950 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-19 09:04:54.170956 | orchestrator | Wednesday 19 February 2025 08:56:41 +0000 (0:00:00.382) 0:07:08.485 **** 2025-02-19 09:04:54.170961 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.170967 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.170976 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.170984 | orchestrator | 
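The long runs of "skipping" in these per-daemon probes are expected: ceph-ansible-style handler roles check every daemon type on every host and gate each probe on group membership, so only the daemons actually hosted on a node return ok. A minimal sketch of such a probe, assuming ceph-ansible-style variables such as container_binary and mon_group_name (illustrative only, not the exact task file used in this job):

  - name: check for a mon container
    ansible.builtin.command: >-
      {{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts.hostname }}
    register: ceph_mon_container_stat
    changed_when: false     # a probe only gathers state, it never changes anything
    failed_when: false      # a missing container is a valid result, not an error
    check_mode: false
    when: inventory_hostname in groups.get(mon_group_name, [])

Under that pattern, testbed-node-0/1/2 report ok only for the mon, mgr and ceph-crash probes, which matches the group layout visible in this log.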
2025-02-19 09:04:54.170992 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-19 09:04:54.171000 | orchestrator | Wednesday 19 February 2025 08:56:41 +0000 (0:00:00.306) 0:07:08.792 **** 2025-02-19 09:04:54.171008 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171017 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171026 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171034 | orchestrator | 2025-02-19 09:04:54.171042 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-19 09:04:54.171051 | orchestrator | Wednesday 19 February 2025 08:56:42 +0000 (0:00:00.376) 0:07:09.168 **** 2025-02-19 09:04:54.171060 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.171067 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.171072 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.171077 | orchestrator | 2025-02-19 09:04:54.171082 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-19 09:04:54.171088 | orchestrator | Wednesday 19 February 2025 08:56:43 +0000 (0:00:01.470) 0:07:10.639 **** 2025-02-19 09:04:54.171098 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171103 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171109 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171114 | orchestrator | 2025-02-19 09:04:54.171119 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-19 09:04:54.171139 | orchestrator | Wednesday 19 February 2025 08:56:44 +0000 (0:00:00.450) 0:07:11.089 **** 2025-02-19 09:04:54.171145 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.171150 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.171155 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.171161 | orchestrator | 2025-02-19 09:04:54.171166 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-19 09:04:54.171171 | orchestrator | Wednesday 19 February 2025 08:56:44 +0000 (0:00:00.452) 0:07:11.541 **** 2025-02-19 09:04:54.171177 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171182 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171187 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171193 | orchestrator | 2025-02-19 09:04:54.171198 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-19 09:04:54.171203 | orchestrator | Wednesday 19 February 2025 08:56:45 +0000 (0:00:00.415) 0:07:11.957 **** 2025-02-19 09:04:54.171208 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171214 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171219 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171224 | orchestrator | 2025-02-19 09:04:54.171230 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-19 09:04:54.171235 | orchestrator | Wednesday 19 February 2025 08:56:45 +0000 (0:00:00.820) 0:07:12.778 **** 2025-02-19 09:04:54.171240 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171246 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171251 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171259 | orchestrator | 2025-02-19 09:04:54.171265 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] 
****************************** 2025-02-19 09:04:54.171270 | orchestrator | Wednesday 19 February 2025 08:56:46 +0000 (0:00:00.382) 0:07:13.160 **** 2025-02-19 09:04:54.171276 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171281 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171286 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171292 | orchestrator | 2025-02-19 09:04:54.171297 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-19 09:04:54.171322 | orchestrator | Wednesday 19 February 2025 08:56:46 +0000 (0:00:00.372) 0:07:13.533 **** 2025-02-19 09:04:54.171328 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171334 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171339 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171344 | orchestrator | 2025-02-19 09:04:54.171349 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-19 09:04:54.171355 | orchestrator | Wednesday 19 February 2025 08:56:47 +0000 (0:00:00.408) 0:07:13.942 **** 2025-02-19 09:04:54.171360 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.171365 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.171370 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.171376 | orchestrator | 2025-02-19 09:04:54.171381 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-19 09:04:54.171390 | orchestrator | Wednesday 19 February 2025 08:56:47 +0000 (0:00:00.716) 0:07:14.658 **** 2025-02-19 09:04:54.171399 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.171408 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.171417 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.171427 | orchestrator | 2025-02-19 09:04:54.171436 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-19 09:04:54.171445 | orchestrator | Wednesday 19 February 2025 08:56:48 +0000 (0:00:00.512) 0:07:15.171 **** 2025-02-19 09:04:54.171454 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171469 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171478 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171484 | orchestrator | 2025-02-19 09:04:54.171489 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-19 09:04:54.171495 | orchestrator | Wednesday 19 February 2025 08:56:48 +0000 (0:00:00.414) 0:07:15.585 **** 2025-02-19 09:04:54.171500 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171505 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171511 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171516 | orchestrator | 2025-02-19 09:04:54.171521 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-19 09:04:54.171530 | orchestrator | Wednesday 19 February 2025 08:56:49 +0000 (0:00:00.467) 0:07:16.053 **** 2025-02-19 09:04:54.171535 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171540 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171546 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171551 | orchestrator | 2025-02-19 09:04:54.171556 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-19 09:04:54.171562 | orchestrator | Wednesday 19 February 
2025 08:56:49 +0000 (0:00:00.688) 0:07:16.741 **** 2025-02-19 09:04:54.171567 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171572 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171577 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171583 | orchestrator | 2025-02-19 09:04:54.171588 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-19 09:04:54.171593 | orchestrator | Wednesday 19 February 2025 08:56:50 +0000 (0:00:00.389) 0:07:17.131 **** 2025-02-19 09:04:54.171599 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171604 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171609 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171614 | orchestrator | 2025-02-19 09:04:54.171620 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-19 09:04:54.171625 | orchestrator | Wednesday 19 February 2025 08:56:50 +0000 (0:00:00.377) 0:07:17.508 **** 2025-02-19 09:04:54.171630 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171636 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171641 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171646 | orchestrator | 2025-02-19 09:04:54.171652 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-19 09:04:54.171661 | orchestrator | Wednesday 19 February 2025 08:56:51 +0000 (0:00:00.436) 0:07:17.945 **** 2025-02-19 09:04:54.171666 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171671 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171677 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171682 | orchestrator | 2025-02-19 09:04:54.171687 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-19 09:04:54.171693 | orchestrator | Wednesday 19 February 2025 08:56:51 +0000 (0:00:00.858) 0:07:18.803 **** 2025-02-19 09:04:54.171698 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171704 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171709 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171714 | orchestrator | 2025-02-19 09:04:54.171720 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-19 09:04:54.171725 | orchestrator | Wednesday 19 February 2025 08:56:52 +0000 (0:00:00.447) 0:07:19.251 **** 2025-02-19 09:04:54.171731 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171736 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171741 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171746 | orchestrator | 2025-02-19 09:04:54.171752 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-19 09:04:54.171757 | orchestrator | Wednesday 19 February 2025 08:56:52 +0000 (0:00:00.417) 0:07:19.668 **** 2025-02-19 09:04:54.171768 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171773 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171779 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171784 | orchestrator | 2025-02-19 09:04:54.171789 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-19 09:04:54.171795 | 
orchestrator | Wednesday 19 February 2025 08:56:53 +0000 (0:00:00.525) 0:07:20.194 **** 2025-02-19 09:04:54.171800 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171805 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171811 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171816 | orchestrator | 2025-02-19 09:04:54.171821 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-19 09:04:54.171826 | orchestrator | Wednesday 19 February 2025 08:56:54 +0000 (0:00:00.737) 0:07:20.931 **** 2025-02-19 09:04:54.171832 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171837 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171842 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171848 | orchestrator | 2025-02-19 09:04:54.171869 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-19 09:04:54.171876 | orchestrator | Wednesday 19 February 2025 08:56:54 +0000 (0:00:00.456) 0:07:21.387 **** 2025-02-19 09:04:54.171882 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.171887 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.171892 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171897 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.171903 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.171908 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171913 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.171918 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.171924 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171932 | orchestrator | 2025-02-19 09:04:54.171937 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-19 09:04:54.171942 | orchestrator | Wednesday 19 February 2025 08:56:55 +0000 (0:00:00.506) 0:07:21.894 **** 2025-02-19 09:04:54.171948 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-19 09:04:54.171953 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-19 09:04:54.171958 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.171963 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-19 09:04:54.171969 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-19 09:04:54.171974 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.171980 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-19 09:04:54.171985 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-19 09:04:54.171990 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.171996 | orchestrator | 2025-02-19 09:04:54.172001 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-19 09:04:54.172006 | orchestrator | Wednesday 19 February 2025 08:56:55 +0000 (0:00:00.556) 0:07:22.451 **** 2025-02-19 09:04:54.172011 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172017 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172022 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172027 | orchestrator | 2025-02-19 09:04:54.172033 | orchestrator | TASK [ceph-config : create ceph conf directory] 
******************************** 2025-02-19 09:04:54.172038 | orchestrator | Wednesday 19 February 2025 08:56:56 +0000 (0:00:00.869) 0:07:23.320 **** 2025-02-19 09:04:54.172043 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172048 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172054 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172059 | orchestrator | 2025-02-19 09:04:54.172064 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.172074 | orchestrator | Wednesday 19 February 2025 08:56:57 +0000 (0:00:00.555) 0:07:23.876 **** 2025-02-19 09:04:54.172080 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172085 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172090 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172095 | orchestrator | 2025-02-19 09:04:54.172101 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.172106 | orchestrator | Wednesday 19 February 2025 08:56:57 +0000 (0:00:00.447) 0:07:24.323 **** 2025-02-19 09:04:54.172111 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172117 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172135 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172142 | orchestrator | 2025-02-19 09:04:54.172147 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:04:54.172152 | orchestrator | Wednesday 19 February 2025 08:56:57 +0000 (0:00:00.439) 0:07:24.763 **** 2025-02-19 09:04:54.172158 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172163 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172168 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172173 | orchestrator | 2025-02-19 09:04:54.172179 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.172184 | orchestrator | Wednesday 19 February 2025 08:56:58 +0000 (0:00:01.014) 0:07:25.778 **** 2025-02-19 09:04:54.172189 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172194 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172200 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172205 | orchestrator | 2025-02-19 09:04:54.172210 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.172215 | orchestrator | Wednesday 19 February 2025 08:56:59 +0000 (0:00:00.530) 0:07:26.308 **** 2025-02-19 09:04:54.172221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.172229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.172234 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.172239 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172245 | orchestrator | 2025-02-19 09:04:54.172250 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.172255 | orchestrator | Wednesday 19 February 2025 08:56:59 +0000 (0:00:00.478) 0:07:26.786 **** 2025-02-19 09:04:54.172261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.172266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 
09:04:54.172271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.172276 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172282 | orchestrator | 2025-02-19 09:04:54.172287 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.172292 | orchestrator | Wednesday 19 February 2025 08:57:00 +0000 (0:00:00.502) 0:07:27.289 **** 2025-02-19 09:04:54.172298 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.172303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.172308 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.172327 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172333 | orchestrator | 2025-02-19 09:04:54.172339 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.172344 | orchestrator | Wednesday 19 February 2025 08:57:01 +0000 (0:00:00.548) 0:07:27.838 **** 2025-02-19 09:04:54.172349 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172355 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172360 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172365 | orchestrator | 2025-02-19 09:04:54.172370 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.172380 | orchestrator | Wednesday 19 February 2025 08:57:01 +0000 (0:00:00.697) 0:07:28.536 **** 2025-02-19 09:04:54.172385 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.172390 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172396 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.172401 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172406 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.172412 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172417 | orchestrator | 2025-02-19 09:04:54.172422 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.172427 | orchestrator | Wednesday 19 February 2025 08:57:02 +0000 (0:00:00.636) 0:07:29.172 **** 2025-02-19 09:04:54.172433 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172438 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172443 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172449 | orchestrator | 2025-02-19 09:04:54.172454 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.172459 | orchestrator | Wednesday 19 February 2025 08:57:02 +0000 (0:00:00.371) 0:07:29.544 **** 2025-02-19 09:04:54.172465 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172470 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172475 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172481 | orchestrator | 2025-02-19 09:04:54.172486 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.172491 | orchestrator | Wednesday 19 February 2025 08:57:03 +0000 (0:00:00.696) 0:07:30.240 **** 2025-02-19 09:04:54.172496 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.172502 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172507 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-02-19 09:04:54.172512 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172518 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.172523 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172528 | orchestrator | 2025-02-19 09:04:54.172534 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.172539 | orchestrator | Wednesday 19 February 2025 08:57:05 +0000 (0:00:01.619) 0:07:31.860 **** 2025-02-19 09:04:54.172544 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172550 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172555 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172560 | orchestrator | 2025-02-19 09:04:54.172565 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.172571 | orchestrator | Wednesday 19 February 2025 08:57:05 +0000 (0:00:00.420) 0:07:32.280 **** 2025-02-19 09:04:54.172576 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.172581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.172587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.172592 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172597 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-19 09:04:54.172603 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-19 09:04:54.172608 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-19 09:04:54.172613 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172618 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-19 09:04:54.172624 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-19 09:04:54.172629 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-19 09:04:54.172634 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172639 | orchestrator | 2025-02-19 09:04:54.172645 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-19 09:04:54.172650 | orchestrator | Wednesday 19 February 2025 08:57:06 +0000 (0:00:00.739) 0:07:33.020 **** 2025-02-19 09:04:54.172658 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172664 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172669 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172674 | orchestrator | 2025-02-19 09:04:54.172680 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-19 09:04:54.172685 | orchestrator | Wednesday 19 February 2025 08:57:07 +0000 (0:00:00.914) 0:07:33.935 **** 2025-02-19 09:04:54.172690 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172695 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172701 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172706 | orchestrator | 2025-02-19 09:04:54.172711 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-19 09:04:54.172717 | orchestrator | Wednesday 19 February 2025 08:57:07 +0000 (0:00:00.648) 0:07:34.583 **** 2025-02-19 09:04:54.172722 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172733 | orchestrator | skipping: [testbed-node-1] 
2025-02-19 09:04:54.172738 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172744 | orchestrator | 2025-02-19 09:04:54.172749 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-19 09:04:54.172754 | orchestrator | Wednesday 19 February 2025 08:57:08 +0000 (0:00:01.011) 0:07:35.595 **** 2025-02-19 09:04:54.172760 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172765 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172770 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172775 | orchestrator | 2025-02-19 09:04:54.172781 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-02-19 09:04:54.172798 | orchestrator | Wednesday 19 February 2025 08:57:09 +0000 (0:00:00.608) 0:07:36.203 **** 2025-02-19 09:04:54.172804 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:04:54.172809 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:04:54.172814 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:04:54.172820 | orchestrator | 2025-02-19 09:04:54.172825 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-02-19 09:04:54.172830 | orchestrator | Wednesday 19 February 2025 08:57:10 +0000 (0:00:01.099) 0:07:37.303 **** 2025-02-19 09:04:54.172835 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.172841 | orchestrator | 2025-02-19 09:04:54.172850 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-02-19 09:04:54.172860 | orchestrator | Wednesday 19 February 2025 08:57:11 +0000 (0:00:01.130) 0:07:38.434 **** 2025-02-19 09:04:54.172870 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.172880 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.172889 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.172899 | orchestrator | 2025-02-19 09:04:54.172908 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-02-19 09:04:54.172918 | orchestrator | Wednesday 19 February 2025 08:57:12 +0000 (0:00:00.790) 0:07:39.225 **** 2025-02-19 09:04:54.172924 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.172929 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.172934 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.172939 | orchestrator | 2025-02-19 09:04:54.172945 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-02-19 09:04:54.172950 | orchestrator | Wednesday 19 February 2025 08:57:12 +0000 (0:00:00.378) 0:07:39.603 **** 2025-02-19 09:04:54.172955 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-19 09:04:54.172960 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-19 09:04:54.172966 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-19 09:04:54.172971 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-02-19 09:04:54.172976 | orchestrator | 2025-02-19 09:04:54.172986 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-02-19 09:04:54.172991 | orchestrator | Wednesday 19 February 2025 08:57:22 +0000 (0:00:09.235) 0:07:48.839 **** 
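For context on the "create ceph mgr keyring(s) on a mon node" step a few entries above: ceph-ansible delegates that keyring creation to the first monitor and then distributes the key. A minimal shell sketch of the kind of command involved, assuming the default cluster name "ceph" and an mgr instance named after the host from this log; the capability set shown is the commonly used mgr profile and is an assumption, not a copy of the ceph-ansible source:

  # Create (or fetch) an mgr keyring on a monitor node and place it where the
  # mgr daemon expects it. "testbed-node-0" is taken from the log above; the
  # caps and paths are illustrative defaults, not values read from this job.
  MGR_NAME=testbed-node-0
  ceph auth get-or-create "mgr.${MGR_NAME}" \
      mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
      -o "/var/lib/ceph/mgr/ceph-${MGR_NAME}/keyring"
  chown ceph:ceph "/var/lib/ceph/mgr/ceph-${MGR_NAME}/keyring"
  chmod 0600 "/var/lib/ceph/mgr/ceph-${MGR_NAME}/keyring"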
2025-02-19 09:04:54.172996 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.173002 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.173007 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.173012 | orchestrator | 2025-02-19 09:04:54.173017 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-02-19 09:04:54.173023 | orchestrator | Wednesday 19 February 2025 08:57:22 +0000 (0:00:00.462) 0:07:49.301 **** 2025-02-19 09:04:54.173028 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-19 09:04:54.173033 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-19 09:04:54.173039 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-19 09:04:54.173044 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-02-19 09:04:54.173049 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:04:54.173055 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:04:54.173060 | orchestrator | 2025-02-19 09:04:54.173065 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-02-19 09:04:54.173071 | orchestrator | Wednesday 19 February 2025 08:57:25 +0000 (0:00:02.720) 0:07:52.022 **** 2025-02-19 09:04:54.173076 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-19 09:04:54.173081 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-19 09:04:54.173087 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-19 09:04:54.173092 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-19 09:04:54.173097 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-02-19 09:04:54.173103 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-02-19 09:04:54.173108 | orchestrator | 2025-02-19 09:04:54.173113 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-02-19 09:04:54.173118 | orchestrator | Wednesday 19 February 2025 08:57:26 +0000 (0:00:01.373) 0:07:53.396 **** 2025-02-19 09:04:54.173153 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.173159 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.173164 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.173170 | orchestrator | 2025-02-19 09:04:54.173175 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-02-19 09:04:54.173180 | orchestrator | Wednesday 19 February 2025 08:57:27 +0000 (0:00:00.900) 0:07:54.296 **** 2025-02-19 09:04:54.173186 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.173191 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.173196 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.173202 | orchestrator | 2025-02-19 09:04:54.173207 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-02-19 09:04:54.173212 | orchestrator | Wednesday 19 February 2025 08:57:28 +0000 (0:00:00.642) 0:07:54.939 **** 2025-02-19 09:04:54.173218 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.173223 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.173228 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.173233 | orchestrator | 2025-02-19 09:04:54.173239 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-02-19 09:04:54.173244 | orchestrator | Wednesday 19 
February 2025 08:57:28 +0000 (0:00:00.450) 0:07:55.390 **** 2025-02-19 09:04:54.173249 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.173255 | orchestrator | 2025-02-19 09:04:54.173260 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-02-19 09:04:54.173265 | orchestrator | Wednesday 19 February 2025 08:57:29 +0000 (0:00:00.712) 0:07:56.102 **** 2025-02-19 09:04:54.173271 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.173276 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.173298 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.173308 | orchestrator | 2025-02-19 09:04:54.173314 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-02-19 09:04:54.173319 | orchestrator | Wednesday 19 February 2025 08:57:29 +0000 (0:00:00.677) 0:07:56.780 **** 2025-02-19 09:04:54.173324 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.173329 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.173335 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.173340 | orchestrator | 2025-02-19 09:04:54.173345 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-02-19 09:04:54.173351 | orchestrator | Wednesday 19 February 2025 08:57:30 +0000 (0:00:00.396) 0:07:57.176 **** 2025-02-19 09:04:54.173356 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.173361 | orchestrator | 2025-02-19 09:04:54.173367 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-02-19 09:04:54.173372 | orchestrator | Wednesday 19 February 2025 08:57:30 +0000 (0:00:00.611) 0:07:57.788 **** 2025-02-19 09:04:54.173377 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.173382 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.173387 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.173393 | orchestrator | 2025-02-19 09:04:54.173398 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-02-19 09:04:54.173406 | orchestrator | Wednesday 19 February 2025 08:57:32 +0000 (0:00:01.763) 0:07:59.552 **** 2025-02-19 09:04:54.173412 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.173417 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.173422 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.173427 | orchestrator | 2025-02-19 09:04:54.173432 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-02-19 09:04:54.173438 | orchestrator | Wednesday 19 February 2025 08:57:34 +0000 (0:00:01.369) 0:08:00.921 **** 2025-02-19 09:04:54.173443 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.173448 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.173454 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.173459 | orchestrator | 2025-02-19 09:04:54.173464 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-02-19 09:04:54.173469 | orchestrator | Wednesday 19 February 2025 08:57:35 +0000 (0:00:01.855) 0:08:02.776 **** 2025-02-19 09:04:54.173475 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.173480 | orchestrator | changed: [testbed-node-1] 
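The "generate systemd unit file", "generate systemd ceph-mgr target file" and "enable ceph-mgr.target" tasks above template per-daemon units and wire them into a target before the managers are started. A rough sketch of the equivalent manual steps on one node; the unit names follow the usual ceph-mgr@<name>.service / ceph-mgr.target pattern and the instance name is taken from this log, while the unit contents themselves are not reproduced here:

  # Reload unit definitions, enable the grouping target and the per-host mgr
  # unit, then start the manager and confirm it is active.
  systemctl daemon-reload
  systemctl enable ceph-mgr.target
  systemctl enable --now "ceph-mgr@testbed-node-0.service"
  systemctl status "ceph-mgr@testbed-node-0.service" --no-pager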
2025-02-19 09:04:54.173485 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.173491 | orchestrator | 2025-02-19 09:04:54.173496 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-02-19 09:04:54.173501 | orchestrator | Wednesday 19 February 2025 08:57:38 +0000 (0:00:02.574) 0:08:05.351 **** 2025-02-19 09:04:54.173507 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.173512 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.173517 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-02-19 09:04:54.173523 | orchestrator | 2025-02-19 09:04:54.173528 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-02-19 09:04:54.173533 | orchestrator | Wednesday 19 February 2025 08:57:39 +0000 (0:00:00.673) 0:08:06.025 **** 2025-02-19 09:04:54.173539 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-02-19 09:04:54.173544 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-02-19 09:04:54.173549 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-02-19 09:04:54.173555 | orchestrator | 2025-02-19 09:04:54.173560 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-02-19 09:04:54.173565 | orchestrator | Wednesday 19 February 2025 08:57:53 +0000 (0:00:13.783) 0:08:19.808 **** 2025-02-19 09:04:54.173571 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-02-19 09:04:54.173579 | orchestrator | 2025-02-19 09:04:54.173585 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-02-19 09:04:54.173590 | orchestrator | Wednesday 19 February 2025 08:57:54 +0000 (0:00:01.818) 0:08:21.627 **** 2025-02-19 09:04:54.173595 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.173601 | orchestrator | 2025-02-19 09:04:54.173606 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-02-19 09:04:54.173611 | orchestrator | Wednesday 19 February 2025 08:57:55 +0000 (0:00:00.652) 0:08:22.280 **** 2025-02-19 09:04:54.173616 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.173622 | orchestrator | 2025-02-19 09:04:54.173627 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-02-19 09:04:54.173632 | orchestrator | Wednesday 19 February 2025 08:57:55 +0000 (0:00:00.365) 0:08:22.645 **** 2025-02-19 09:04:54.173637 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-02-19 09:04:54.173643 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-02-19 09:04:54.173648 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-02-19 09:04:54.173653 | orchestrator | 2025-02-19 09:04:54.173658 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-02-19 09:04:54.173664 | orchestrator | Wednesday 19 February 2025 08:58:02 +0000 (0:00:06.553) 0:08:29.199 **** 2025-02-19 09:04:54.173669 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-02-19 09:04:54.173674 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-02-19 
09:04:54.173678 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-02-19 09:04:54.173683 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-02-19 09:04:54.173688 | orchestrator | 2025-02-19 09:04:54.173693 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-19 09:04:54.173709 | orchestrator | Wednesday 19 February 2025 08:58:07 +0000 (0:00:05.432) 0:08:34.632 **** 2025-02-19 09:04:54.173715 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.173720 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.173725 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.173730 | orchestrator | 2025-02-19 09:04:54.173735 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-19 09:04:54.173740 | orchestrator | Wednesday 19 February 2025 08:58:08 +0000 (0:00:01.008) 0:08:35.640 **** 2025-02-19 09:04:54.173744 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:04:54.173749 | orchestrator | 2025-02-19 09:04:54.173754 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-02-19 09:04:54.173759 | orchestrator | Wednesday 19 February 2025 08:58:09 +0000 (0:00:00.936) 0:08:36.577 **** 2025-02-19 09:04:54.173764 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.173771 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.173776 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.173781 | orchestrator | 2025-02-19 09:04:54.173786 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-19 09:04:54.173793 | orchestrator | Wednesday 19 February 2025 08:58:10 +0000 (0:00:00.470) 0:08:37.048 **** 2025-02-19 09:04:54.173798 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.173803 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.173807 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.173812 | orchestrator | 2025-02-19 09:04:54.173817 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-02-19 09:04:54.173822 | orchestrator | Wednesday 19 February 2025 08:58:11 +0000 (0:00:01.316) 0:08:38.365 **** 2025-02-19 09:04:54.173827 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-19 09:04:54.173832 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-19 09:04:54.173839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-19 09:04:54.173844 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.173852 | orchestrator | 2025-02-19 09:04:54.173856 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-02-19 09:04:54.173861 | orchestrator | Wednesday 19 February 2025 08:58:12 +0000 (0:00:01.280) 0:08:39.645 **** 2025-02-19 09:04:54.173866 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.173871 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.173876 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.173881 | orchestrator | 2025-02-19 09:04:54.173886 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-19 09:04:54.173890 | orchestrator | Wednesday 19 February 2025 08:58:13 +0000 (0:00:00.540) 0:08:40.186 **** 2025-02-19 
09:04:54.173895 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.173900 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.173905 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.173909 | orchestrator | 2025-02-19 09:04:54.173914 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-02-19 09:04:54.173919 | orchestrator | 2025-02-19 09:04:54.173924 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-19 09:04:54.173929 | orchestrator | Wednesday 19 February 2025 08:58:15 +0000 (0:00:02.580) 0:08:42.767 **** 2025-02-19 09:04:54.173934 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.173939 | orchestrator | 2025-02-19 09:04:54.173943 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-19 09:04:54.173948 | orchestrator | Wednesday 19 February 2025 08:58:16 +0000 (0:00:00.874) 0:08:43.641 **** 2025-02-19 09:04:54.173953 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.173958 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.173963 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.173968 | orchestrator | 2025-02-19 09:04:54.173972 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-19 09:04:54.173977 | orchestrator | Wednesday 19 February 2025 08:58:17 +0000 (0:00:00.411) 0:08:44.052 **** 2025-02-19 09:04:54.173982 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.173987 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.173992 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.173996 | orchestrator | 2025-02-19 09:04:54.174001 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-19 09:04:54.174006 | orchestrator | Wednesday 19 February 2025 08:58:18 +0000 (0:00:00.843) 0:08:44.895 **** 2025-02-19 09:04:54.174011 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.174037 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.174046 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.174053 | orchestrator | 2025-02-19 09:04:54.174062 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-19 09:04:54.174070 | orchestrator | Wednesday 19 February 2025 08:58:19 +0000 (0:00:01.168) 0:08:46.064 **** 2025-02-19 09:04:54.174078 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.174087 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.174095 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.174103 | orchestrator | 2025-02-19 09:04:54.174111 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-19 09:04:54.174119 | orchestrator | Wednesday 19 February 2025 08:58:20 +0000 (0:00:00.837) 0:08:46.902 **** 2025-02-19 09:04:54.174148 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174156 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174163 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174170 | orchestrator | 2025-02-19 09:04:54.174177 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-19 09:04:54.174186 | orchestrator | Wednesday 19 February 2025 08:58:20 +0000 (0:00:00.364) 0:08:47.266 **** 
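The "check for a ... container" tasks in this block decide which restart handlers may run later by probing each node for running daemon containers. A hedged sketch of such a probe with Docker (Podman works the same way by swapping the binary); the name filter string is an assumption about the conventional container naming, not the exact expression ceph-ansible uses:

  # Exit path 0 if an OSD container is running on this host, otherwise report
  # that none was found. "ceph-osd" is an illustrative name prefix.
  if docker ps --filter "name=ceph-osd" --format '{{.Names}}' | grep -q .; then
      echo "osd container running"
  else
      echo "no osd container"
  fi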
2025-02-19 09:04:54.174191 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174200 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174205 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174210 | orchestrator | 2025-02-19 09:04:54.174215 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-19 09:04:54.174221 | orchestrator | Wednesday 19 February 2025 08:58:20 +0000 (0:00:00.360) 0:08:47.627 **** 2025-02-19 09:04:54.174228 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174257 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174266 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174273 | orchestrator | 2025-02-19 09:04:54.174280 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-19 09:04:54.174287 | orchestrator | Wednesday 19 February 2025 08:58:21 +0000 (0:00:00.673) 0:08:48.300 **** 2025-02-19 09:04:54.174294 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174302 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174310 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174318 | orchestrator | 2025-02-19 09:04:54.174326 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-19 09:04:54.174333 | orchestrator | Wednesday 19 February 2025 08:58:21 +0000 (0:00:00.371) 0:08:48.671 **** 2025-02-19 09:04:54.174338 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174343 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174348 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174353 | orchestrator | 2025-02-19 09:04:54.174358 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-19 09:04:54.174362 | orchestrator | Wednesday 19 February 2025 08:58:22 +0000 (0:00:00.390) 0:08:49.062 **** 2025-02-19 09:04:54.174367 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174372 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174377 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174382 | orchestrator | 2025-02-19 09:04:54.174387 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-19 09:04:54.174391 | orchestrator | Wednesday 19 February 2025 08:58:22 +0000 (0:00:00.344) 0:08:49.406 **** 2025-02-19 09:04:54.174396 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.174401 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.174406 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.174411 | orchestrator | 2025-02-19 09:04:54.174420 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-19 09:04:54.174425 | orchestrator | Wednesday 19 February 2025 08:58:23 +0000 (0:00:01.218) 0:08:50.625 **** 2025-02-19 09:04:54.174429 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174434 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174439 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174444 | orchestrator | 2025-02-19 09:04:54.174449 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-19 09:04:54.174454 | orchestrator | Wednesday 19 February 2025 08:58:24 +0000 (0:00:00.397) 0:08:51.022 **** 2025-02-19 09:04:54.174458 | orchestrator | skipping: [testbed-node-3] 
2025-02-19 09:04:54.174463 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174468 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174475 | orchestrator | 2025-02-19 09:04:54.174480 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-19 09:04:54.174485 | orchestrator | Wednesday 19 February 2025 08:58:24 +0000 (0:00:00.430) 0:08:51.453 **** 2025-02-19 09:04:54.174490 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.174495 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.174500 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.174505 | orchestrator | 2025-02-19 09:04:54.174510 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-19 09:04:54.174514 | orchestrator | Wednesday 19 February 2025 08:58:24 +0000 (0:00:00.330) 0:08:51.783 **** 2025-02-19 09:04:54.174519 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.174524 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.174534 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.174538 | orchestrator | 2025-02-19 09:04:54.174543 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-19 09:04:54.174548 | orchestrator | Wednesday 19 February 2025 08:58:25 +0000 (0:00:00.801) 0:08:52.584 **** 2025-02-19 09:04:54.174553 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.174558 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.174563 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.174567 | orchestrator | 2025-02-19 09:04:54.174572 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-19 09:04:54.174577 | orchestrator | Wednesday 19 February 2025 08:58:26 +0000 (0:00:00.390) 0:08:52.975 **** 2025-02-19 09:04:54.174582 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174587 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174592 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174597 | orchestrator | 2025-02-19 09:04:54.174602 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-19 09:04:54.174606 | orchestrator | Wednesday 19 February 2025 08:58:26 +0000 (0:00:00.360) 0:08:53.336 **** 2025-02-19 09:04:54.174611 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174616 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174621 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174626 | orchestrator | 2025-02-19 09:04:54.174631 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-19 09:04:54.174636 | orchestrator | Wednesday 19 February 2025 08:58:26 +0000 (0:00:00.356) 0:08:53.692 **** 2025-02-19 09:04:54.174641 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174645 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174650 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174655 | orchestrator | 2025-02-19 09:04:54.174660 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-19 09:04:54.174665 | orchestrator | Wednesday 19 February 2025 08:58:27 +0000 (0:00:00.780) 0:08:54.473 **** 2025-02-19 09:04:54.174670 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.174675 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.174683 | orchestrator | ok: 
[testbed-node-5] 2025-02-19 09:04:54.174688 | orchestrator | 2025-02-19 09:04:54.174693 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-19 09:04:54.174698 | orchestrator | Wednesday 19 February 2025 08:58:28 +0000 (0:00:00.420) 0:08:54.894 **** 2025-02-19 09:04:54.174703 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174708 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174713 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174717 | orchestrator | 2025-02-19 09:04:54.174722 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-19 09:04:54.174727 | orchestrator | Wednesday 19 February 2025 08:58:28 +0000 (0:00:00.662) 0:08:55.556 **** 2025-02-19 09:04:54.174732 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174737 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174756 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174761 | orchestrator | 2025-02-19 09:04:54.174766 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-19 09:04:54.174771 | orchestrator | Wednesday 19 February 2025 08:58:29 +0000 (0:00:00.357) 0:08:55.913 **** 2025-02-19 09:04:54.174776 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174781 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174786 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174790 | orchestrator | 2025-02-19 09:04:54.174795 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-19 09:04:54.174800 | orchestrator | Wednesday 19 February 2025 08:58:29 +0000 (0:00:00.695) 0:08:56.609 **** 2025-02-19 09:04:54.174805 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174810 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174815 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174820 | orchestrator | 2025-02-19 09:04:54.174830 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-19 09:04:54.174835 | orchestrator | Wednesday 19 February 2025 08:58:30 +0000 (0:00:00.372) 0:08:56.981 **** 2025-02-19 09:04:54.174840 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174845 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174850 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174855 | orchestrator | 2025-02-19 09:04:54.174859 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-19 09:04:54.174864 | orchestrator | Wednesday 19 February 2025 08:58:30 +0000 (0:00:00.496) 0:08:57.477 **** 2025-02-19 09:04:54.174869 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174874 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174879 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174884 | orchestrator | 2025-02-19 09:04:54.174888 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-19 09:04:54.174893 | orchestrator | Wednesday 19 February 2025 08:58:31 +0000 (0:00:00.339) 0:08:57.817 **** 2025-02-19 09:04:54.174898 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174903 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174908 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174913 | orchestrator | 
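The next task runs 'ceph-volume lvm batch --report' to predict how many OSDs a given device list would produce; it is skipped here because no new devices are being prepared on these nodes. A minimal sketch of that report and of deriving an OSD count from it, matching the "legacy report" / "new report" distinction in the tasks below; the device path is purely illustrative and jq is assumed to be available:

  # Ask ceph-volume what it would do with the given devices, without applying
  # anything, then count the planned OSDs from the JSON report. The legacy
  # report is a plain list, the newer one wraps the list in an "osds" key.
  ceph-volume lvm batch --report --format json /dev/sdb > /tmp/batch_report.json
  jq 'if type == "array" then length else (.osds | length) end' /tmp/batch_report.json

  # Count OSDs that already exist on this host (the 'lvm list' task below).
  ceph-volume lvm list --format json | jq 'keys | length'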
2025-02-19 09:04:54.174918 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-19 09:04:54.174925 | orchestrator | Wednesday 19 February 2025 08:58:31 +0000 (0:00:00.692) 0:08:58.510 **** 2025-02-19 09:04:54.174930 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174935 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174940 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174945 | orchestrator | 2025-02-19 09:04:54.174950 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-19 09:04:54.174955 | orchestrator | Wednesday 19 February 2025 08:58:32 +0000 (0:00:00.419) 0:08:58.929 **** 2025-02-19 09:04:54.174960 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174965 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174970 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.174974 | orchestrator | 2025-02-19 09:04:54.174979 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-19 09:04:54.174984 | orchestrator | Wednesday 19 February 2025 08:58:32 +0000 (0:00:00.349) 0:08:59.278 **** 2025-02-19 09:04:54.174989 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.174994 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.174999 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175006 | orchestrator | 2025-02-19 09:04:54.175011 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-19 09:04:54.175016 | orchestrator | Wednesday 19 February 2025 08:58:32 +0000 (0:00:00.376) 0:08:59.655 **** 2025-02-19 09:04:54.175021 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175026 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175031 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175036 | orchestrator | 2025-02-19 09:04:54.175041 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-19 09:04:54.175046 | orchestrator | Wednesday 19 February 2025 08:58:33 +0000 (0:00:00.680) 0:09:00.336 **** 2025-02-19 09:04:54.175050 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175055 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175060 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175065 | orchestrator | 2025-02-19 09:04:54.175070 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-19 09:04:54.175075 | orchestrator | Wednesday 19 February 2025 08:58:34 +0000 (0:00:00.554) 0:09:00.890 **** 2025-02-19 09:04:54.175080 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.175084 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.175093 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.175097 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.175102 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175107 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175112 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.175117 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.175133 | orchestrator | skipping: [testbed-node-5] 
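The osd_memory_target handling around this point is skipped because nothing overrides "osd memory target" in ceph_conf_overrides on this run. For reference, a back-of-the-envelope version of the sizing such automation typically performs, dividing a share of host memory across the local OSDs; the 0.7 safety factor, the 4 GB floor and the OSD count are assumptions for illustration, not values read from this deployment:

  # Derive a per-OSD memory target from total RAM: reserve a fraction for the
  # rest of the system, split the remainder across the OSDs on the host, and
  # never go below the common 4 GB default.
  NUM_OSDS=2                                   # placeholder; detected per host in practice
  TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
  TARGET=$(( TOTAL_KB * 1024 * 7 / 10 / NUM_OSDS ))
  FLOOR=$(( 4 * 1024 * 1024 * 1024 ))
  [ "$TARGET" -lt "$FLOOR" ] && TARGET=$FLOOR
  echo "osd_memory_target=${TARGET}"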
2025-02-19 09:04:54.175139 | orchestrator | 2025-02-19 09:04:54.175144 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-19 09:04:54.175149 | orchestrator | Wednesday 19 February 2025 08:58:34 +0000 (0:00:00.470) 0:09:01.360 **** 2025-02-19 09:04:54.175154 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-19 09:04:54.175159 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-19 09:04:54.175164 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175169 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-19 09:04:54.175174 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-19 09:04:54.175179 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175183 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-19 09:04:54.175201 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-19 09:04:54.175206 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175211 | orchestrator | 2025-02-19 09:04:54.175216 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-19 09:04:54.175221 | orchestrator | Wednesday 19 February 2025 08:58:35 +0000 (0:00:00.493) 0:09:01.854 **** 2025-02-19 09:04:54.175226 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175231 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175235 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175240 | orchestrator | 2025-02-19 09:04:54.175245 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-19 09:04:54.175250 | orchestrator | Wednesday 19 February 2025 08:58:35 +0000 (0:00:00.710) 0:09:02.565 **** 2025-02-19 09:04:54.175254 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175259 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175264 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175269 | orchestrator | 2025-02-19 09:04:54.175274 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.175279 | orchestrator | Wednesday 19 February 2025 08:58:36 +0000 (0:00:00.392) 0:09:02.958 **** 2025-02-19 09:04:54.175283 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175288 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175293 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175298 | orchestrator | 2025-02-19 09:04:54.175303 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.175307 | orchestrator | Wednesday 19 February 2025 08:58:36 +0000 (0:00:00.394) 0:09:03.352 **** 2025-02-19 09:04:54.175312 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175317 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175322 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175327 | orchestrator | 2025-02-19 09:04:54.175331 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:04:54.175336 | orchestrator | Wednesday 19 February 2025 08:58:36 +0000 (0:00:00.342) 0:09:03.694 **** 2025-02-19 09:04:54.175341 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175346 | orchestrator | 
skipping: [testbed-node-4] 2025-02-19 09:04:54.175351 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175355 | orchestrator | 2025-02-19 09:04:54.175360 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.175368 | orchestrator | Wednesday 19 February 2025 08:58:37 +0000 (0:00:00.669) 0:09:04.364 **** 2025-02-19 09:04:54.175382 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175390 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175398 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175406 | orchestrator | 2025-02-19 09:04:54.175414 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.175422 | orchestrator | Wednesday 19 February 2025 08:58:37 +0000 (0:00:00.410) 0:09:04.775 **** 2025-02-19 09:04:54.175430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.175438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.175446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.175455 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175464 | orchestrator | 2025-02-19 09:04:54.175472 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.175480 | orchestrator | Wednesday 19 February 2025 08:58:38 +0000 (0:00:00.696) 0:09:05.471 **** 2025-02-19 09:04:54.175485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.175490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.175495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.175500 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175505 | orchestrator | 2025-02-19 09:04:54.175509 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.175514 | orchestrator | Wednesday 19 February 2025 08:58:39 +0000 (0:00:00.465) 0:09:05.937 **** 2025-02-19 09:04:54.175519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.175524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.175528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.175533 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175538 | orchestrator | 2025-02-19 09:04:54.175543 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.175548 | orchestrator | Wednesday 19 February 2025 08:58:39 +0000 (0:00:00.476) 0:09:06.413 **** 2025-02-19 09:04:54.175552 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175557 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175562 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175567 | orchestrator | 2025-02-19 09:04:54.175571 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.175576 | orchestrator | Wednesday 19 February 2025 08:58:40 +0000 (0:00:00.756) 0:09:07.170 **** 2025-02-19 09:04:54.175583 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.175595 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175602 | orchestrator | skipping: [testbed-node-4] => 
(item=0)  2025-02-19 09:04:54.175610 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175617 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.175625 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175633 | orchestrator | 2025-02-19 09:04:54.175641 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.175649 | orchestrator | Wednesday 19 February 2025 08:58:41 +0000 (0:00:00.637) 0:09:07.807 **** 2025-02-19 09:04:54.175656 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175661 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175665 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175670 | orchestrator | 2025-02-19 09:04:54.175675 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.175696 | orchestrator | Wednesday 19 February 2025 08:58:41 +0000 (0:00:00.397) 0:09:08.205 **** 2025-02-19 09:04:54.175701 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175706 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175711 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175716 | orchestrator | 2025-02-19 09:04:54.175721 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.175730 | orchestrator | Wednesday 19 February 2025 08:58:41 +0000 (0:00:00.374) 0:09:08.579 **** 2025-02-19 09:04:54.175735 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.175740 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175745 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.175750 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175754 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.175759 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175764 | orchestrator | 2025-02-19 09:04:54.175769 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.175774 | orchestrator | Wednesday 19 February 2025 08:58:42 +0000 (0:00:01.125) 0:09:09.705 **** 2025-02-19 09:04:54.175779 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.175784 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175788 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.175793 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175798 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.175803 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175808 | orchestrator | 2025-02-19 09:04:54.175813 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.175818 | orchestrator | Wednesday 19 February 2025 08:58:43 +0000 (0:00:00.463) 0:09:10.169 **** 2025-02-19 09:04:54.175822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.175827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.175832 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.175837 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-19 09:04:54.175842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 09:04:54.175846 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 09:04:54.175851 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175860 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175865 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-19 09:04:54.175870 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 09:04:54.175875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 09:04:54.175880 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175885 | orchestrator | 2025-02-19 09:04:54.175890 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-19 09:04:54.175895 | orchestrator | Wednesday 19 February 2025 08:58:44 +0000 (0:00:00.833) 0:09:11.002 **** 2025-02-19 09:04:54.175900 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175905 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175909 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175914 | orchestrator | 2025-02-19 09:04:54.175919 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-19 09:04:54.175924 | orchestrator | Wednesday 19 February 2025 08:58:45 +0000 (0:00:00.942) 0:09:11.944 **** 2025-02-19 09:04:54.175929 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.175934 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175941 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-19 09:04:54.175946 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175951 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-19 09:04:54.175956 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175961 | orchestrator | 2025-02-19 09:04:54.175966 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-19 09:04:54.175973 | orchestrator | Wednesday 19 February 2025 08:58:45 +0000 (0:00:00.718) 0:09:12.663 **** 2025-02-19 09:04:54.175978 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.175983 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.175988 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.175993 | orchestrator | 2025-02-19 09:04:54.175998 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-19 09:04:54.176003 | orchestrator | Wednesday 19 February 2025 08:58:46 +0000 (0:00:00.953) 0:09:13.616 **** 2025-02-19 09:04:54.176008 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176012 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176017 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176022 | orchestrator | 2025-02-19 09:04:54.176027 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-02-19 09:04:54.176032 | orchestrator | Wednesday 19 February 2025 08:58:47 +0000 (0:00:00.685) 0:09:14.301 **** 2025-02-19 09:04:54.176037 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.176041 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.176046 | orchestrator | 
ok: [testbed-node-5] 2025-02-19 09:04:54.176051 | orchestrator | 2025-02-19 09:04:54.176056 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-02-19 09:04:54.176061 | orchestrator | Wednesday 19 February 2025 08:58:48 +0000 (0:00:00.715) 0:09:15.017 **** 2025-02-19 09:04:54.176066 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-19 09:04:54.176071 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:04:54.176076 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:04:54.176080 | orchestrator | 2025-02-19 09:04:54.176096 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-02-19 09:04:54.176102 | orchestrator | Wednesday 19 February 2025 08:58:49 +0000 (0:00:01.053) 0:09:16.070 **** 2025-02-19 09:04:54.176107 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.176112 | orchestrator | 2025-02-19 09:04:54.176116 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-02-19 09:04:54.176121 | orchestrator | Wednesday 19 February 2025 08:58:49 +0000 (0:00:00.679) 0:09:16.750 **** 2025-02-19 09:04:54.176156 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176161 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176166 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176171 | orchestrator | 2025-02-19 09:04:54.176176 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-02-19 09:04:54.176181 | orchestrator | Wednesday 19 February 2025 08:58:50 +0000 (0:00:00.357) 0:09:17.108 **** 2025-02-19 09:04:54.176186 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176190 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176195 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176200 | orchestrator | 2025-02-19 09:04:54.176205 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-02-19 09:04:54.176212 | orchestrator | Wednesday 19 February 2025 08:58:51 +0000 (0:00:00.729) 0:09:17.838 **** 2025-02-19 09:04:54.176217 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176222 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176227 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176231 | orchestrator | 2025-02-19 09:04:54.176236 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-02-19 09:04:54.176241 | orchestrator | Wednesday 19 February 2025 08:58:51 +0000 (0:00:00.386) 0:09:18.225 **** 2025-02-19 09:04:54.176246 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176251 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176255 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176263 | orchestrator | 2025-02-19 09:04:54.176268 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-02-19 09:04:54.176273 | orchestrator | Wednesday 19 February 2025 08:58:51 +0000 (0:00:00.329) 0:09:18.554 **** 2025-02-19 09:04:54.176278 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.176283 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.176287 
| orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.176292 | orchestrator | 2025-02-19 09:04:54.176297 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-02-19 09:04:54.176302 | orchestrator | Wednesday 19 February 2025 08:58:52 +0000 (0:00:00.748) 0:09:19.302 **** 2025-02-19 09:04:54.176307 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.176311 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.176316 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.176321 | orchestrator | 2025-02-19 09:04:54.176326 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-02-19 09:04:54.176330 | orchestrator | Wednesday 19 February 2025 08:58:53 +0000 (0:00:00.766) 0:09:20.069 **** 2025-02-19 09:04:54.176336 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-19 09:04:54.176343 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-19 09:04:54.176348 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-19 09:04:54.176353 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-19 09:04:54.176358 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-19 09:04:54.176365 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-19 09:04:54.176370 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-19 09:04:54.176375 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-19 09:04:54.176380 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-19 09:04:54.176385 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-19 09:04:54.176390 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-19 09:04:54.176394 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-19 09:04:54.176399 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-19 09:04:54.176404 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-19 09:04:54.176409 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-19 09:04:54.176414 | orchestrator | 2025-02-19 09:04:54.176419 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-02-19 09:04:54.176423 | orchestrator | Wednesday 19 February 2025 08:58:56 +0000 (0:00:03.452) 0:09:23.521 **** 2025-02-19 09:04:54.176428 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176433 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176438 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176443 | orchestrator | 2025-02-19 09:04:54.176448 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-02-19 09:04:54.176452 | orchestrator | Wednesday 19 February 2025 08:58:57 +0000 (0:00:00.428) 0:09:23.950 **** 2025-02-19 09:04:54.176457 | orchestrator | 
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.176462 | orchestrator | 2025-02-19 09:04:54.176467 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-02-19 09:04:54.176485 | orchestrator | Wednesday 19 February 2025 08:58:58 +0000 (0:00:01.006) 0:09:24.956 **** 2025-02-19 09:04:54.176491 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-19 09:04:54.176499 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-19 09:04:54.176504 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-19 09:04:54.176509 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-02-19 09:04:54.176514 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-02-19 09:04:54.176519 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-02-19 09:04:54.176524 | orchestrator | 2025-02-19 09:04:54.176528 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-02-19 09:04:54.176533 | orchestrator | Wednesday 19 February 2025 08:58:59 +0000 (0:00:01.252) 0:09:26.208 **** 2025-02-19 09:04:54.176538 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:04:54.176543 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.176548 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-19 09:04:54.176553 | orchestrator | 2025-02-19 09:04:54.176557 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-02-19 09:04:54.176562 | orchestrator | Wednesday 19 February 2025 08:59:01 +0000 (0:00:02.089) 0:09:28.297 **** 2025-02-19 09:04:54.176567 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-19 09:04:54.176572 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.176577 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.176581 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-19 09:04:54.176586 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-19 09:04:54.176591 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.176596 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-19 09:04:54.176601 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-19 09:04:54.176605 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.176610 | orchestrator | 2025-02-19 09:04:54.176615 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-02-19 09:04:54.176620 | orchestrator | Wednesday 19 February 2025 08:59:03 +0000 (0:00:01.570) 0:09:29.868 **** 2025-02-19 09:04:54.176625 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-19 09:04:54.176630 | orchestrator | 2025-02-19 09:04:54.176635 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-02-19 09:04:54.176642 | orchestrator | Wednesday 19 February 2025 08:59:05 +0000 (0:00:02.723) 0:09:32.592 **** 2025-02-19 09:04:54.176647 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.176652 | orchestrator | 2025-02-19 09:04:54.176657 | orchestrator | TASK [ceph-osd : set_fact container_env_args 
'-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-02-19 09:04:54.176662 | orchestrator | Wednesday 19 February 2025 08:59:06 +0000 (0:00:00.594) 0:09:33.186 **** 2025-02-19 09:04:54.176666 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176671 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176676 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176681 | orchestrator | 2025-02-19 09:04:54.176686 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-02-19 09:04:54.176691 | orchestrator | Wednesday 19 February 2025 08:59:07 +0000 (0:00:00.725) 0:09:33.912 **** 2025-02-19 09:04:54.176695 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176700 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176705 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176710 | orchestrator | 2025-02-19 09:04:54.176714 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-02-19 09:04:54.176719 | orchestrator | Wednesday 19 February 2025 08:59:07 +0000 (0:00:00.343) 0:09:34.256 **** 2025-02-19 09:04:54.176724 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176729 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176739 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176746 | orchestrator | 2025-02-19 09:04:54.176751 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-02-19 09:04:54.176756 | orchestrator | Wednesday 19 February 2025 08:59:07 +0000 (0:00:00.328) 0:09:34.584 **** 2025-02-19 09:04:54.176761 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.176766 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.176770 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.176775 | orchestrator | 2025-02-19 09:04:54.176780 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-02-19 09:04:54.176785 | orchestrator | Wednesday 19 February 2025 08:59:08 +0000 (0:00:00.336) 0:09:34.920 **** 2025-02-19 09:04:54.176790 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.176795 | orchestrator | 2025-02-19 09:04:54.176800 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-02-19 09:04:54.176804 | orchestrator | Wednesday 19 February 2025 08:59:09 +0000 (0:00:00.911) 0:09:35.832 **** 2025-02-19 09:04:54.176809 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3ffe4904-1899-5051-bec6-9b9e5f20cdb9', 'data_vg': 'ceph-3ffe4904-1899-5051-bec6-9b9e5f20cdb9'}) 2025-02-19 09:04:54.176816 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-118242ed-6ea1-54c4-bfaa-1565dde441bc', 'data_vg': 'ceph-118242ed-6ea1-54c4-bfaa-1565dde441bc'}) 2025-02-19 09:04:54.176821 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-45b4b457-0c8f-5565-8330-30b761ce6399', 'data_vg': 'ceph-45b4b457-0c8f-5565-8330-30b761ce6399'}) 2025-02-19 09:04:54.176838 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bbf6aa6c-a724-5ce6-b507-3cef42d33bac', 'data_vg': 'ceph-bbf6aa6c-a724-5ce6-b507-3cef42d33bac'}) 2025-02-19 09:04:54.176843 | orchestrator | changed: [testbed-node-4] => (item={'data': 
'osd-block-f77e8fc9-ceed-59c4-8328-4d335fb6ee54', 'data_vg': 'ceph-f77e8fc9-ceed-59c4-8328-4d335fb6ee54'}) 2025-02-19 09:04:54.176848 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185b0f4c-91cb-52bd-aac1-e01f69de71f3', 'data_vg': 'ceph-185b0f4c-91cb-52bd-aac1-e01f69de71f3'}) 2025-02-19 09:04:54.176853 | orchestrator | 2025-02-19 09:04:54.176858 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-02-19 09:04:54.176863 | orchestrator | Wednesday 19 February 2025 08:59:48 +0000 (0:00:39.533) 0:10:15.365 **** 2025-02-19 09:04:54.176867 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.176872 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.176877 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.176882 | orchestrator | 2025-02-19 09:04:54.176887 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-02-19 09:04:54.176892 | orchestrator | Wednesday 19 February 2025 08:59:49 +0000 (0:00:00.644) 0:10:16.009 **** 2025-02-19 09:04:54.176899 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.176904 | orchestrator | 2025-02-19 09:04:54.176909 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-02-19 09:04:54.176914 | orchestrator | Wednesday 19 February 2025 08:59:49 +0000 (0:00:00.657) 0:10:16.667 **** 2025-02-19 09:04:54.176919 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.176923 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.176928 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.176933 | orchestrator | 2025-02-19 09:04:54.176938 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-02-19 09:04:54.176943 | orchestrator | Wednesday 19 February 2025 08:59:50 +0000 (0:00:00.722) 0:10:17.390 **** 2025-02-19 09:04:54.176948 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.176952 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.176957 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.176965 | orchestrator | 2025-02-19 09:04:54.176970 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-02-19 09:04:54.176975 | orchestrator | Wednesday 19 February 2025 08:59:52 +0000 (0:00:01.816) 0:10:19.206 **** 2025-02-19 09:04:54.176980 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.176985 | orchestrator | 2025-02-19 09:04:54.176989 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-02-19 09:04:54.176994 | orchestrator | Wednesday 19 February 2025 08:59:53 +0000 (0:00:00.795) 0:10:20.001 **** 2025-02-19 09:04:54.176999 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.177004 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.177009 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.177013 | orchestrator | 2025-02-19 09:04:54.177018 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-02-19 09:04:54.177023 | orchestrator | Wednesday 19 February 2025 08:59:54 +0000 (0:00:01.612) 0:10:21.614 **** 2025-02-19 09:04:54.177028 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.177033 | orchestrator | changed: 
[testbed-node-4] 2025-02-19 09:04:54.177038 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.177042 | orchestrator | 2025-02-19 09:04:54.177047 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-02-19 09:04:54.177052 | orchestrator | Wednesday 19 February 2025 08:59:56 +0000 (0:00:01.556) 0:10:23.170 **** 2025-02-19 09:04:54.177057 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.177062 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.177066 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.177071 | orchestrator | 2025-02-19 09:04:54.177076 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-02-19 09:04:54.177081 | orchestrator | Wednesday 19 February 2025 08:59:58 +0000 (0:00:01.967) 0:10:25.138 **** 2025-02-19 09:04:54.177085 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177090 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177095 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.177100 | orchestrator | 2025-02-19 09:04:54.177105 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-02-19 09:04:54.177109 | orchestrator | Wednesday 19 February 2025 08:59:58 +0000 (0:00:00.439) 0:10:25.578 **** 2025-02-19 09:04:54.177114 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177119 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177135 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.177140 | orchestrator | 2025-02-19 09:04:54.177145 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-02-19 09:04:54.177150 | orchestrator | Wednesday 19 February 2025 08:59:59 +0000 (0:00:00.781) 0:10:26.360 **** 2025-02-19 09:04:54.177154 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-19 09:04:54.177159 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-02-19 09:04:54.177164 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-02-19 09:04:54.177169 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-02-19 09:04:54.177174 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-02-19 09:04:54.177179 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-02-19 09:04:54.177183 | orchestrator | 2025-02-19 09:04:54.177188 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-02-19 09:04:54.177220 | orchestrator | Wednesday 19 February 2025 09:00:00 +0000 (0:00:01.272) 0:10:27.632 **** 2025-02-19 09:04:54.177225 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-02-19 09:04:54.177230 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-02-19 09:04:54.177235 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-02-19 09:04:54.177240 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-02-19 09:04:54.177245 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-02-19 09:04:54.177263 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-02-19 09:04:54.177268 | orchestrator | 2025-02-19 09:04:54.177285 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-02-19 09:04:54.177293 | orchestrator | Wednesday 19 February 2025 09:00:04 +0000 (0:00:03.569) 0:10:31.202 **** 2025-02-19 09:04:54.177298 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177303 | orchestrator | skipping: [testbed-node-4] 2025-02-19 
09:04:54.177308 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-19 09:04:54.177313 | orchestrator | 2025-02-19 09:04:54.177318 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-02-19 09:04:54.177322 | orchestrator | Wednesday 19 February 2025 09:00:06 +0000 (0:00:02.426) 0:10:33.629 **** 2025-02-19 09:04:54.177327 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177332 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177337 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-02-19 09:04:54.177342 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-19 09:04:54.177347 | orchestrator | 2025-02-19 09:04:54.177352 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-02-19 09:04:54.177357 | orchestrator | Wednesday 19 February 2025 09:00:19 +0000 (0:00:12.936) 0:10:46.566 **** 2025-02-19 09:04:54.177361 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177366 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177371 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.177376 | orchestrator | 2025-02-19 09:04:54.177381 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-02-19 09:04:54.177386 | orchestrator | Wednesday 19 February 2025 09:00:20 +0000 (0:00:00.534) 0:10:47.100 **** 2025-02-19 09:04:54.177391 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177395 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177400 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.177405 | orchestrator | 2025-02-19 09:04:54.177410 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-19 09:04:54.177415 | orchestrator | Wednesday 19 February 2025 09:00:21 +0000 (0:00:01.235) 0:10:48.336 **** 2025-02-19 09:04:54.177419 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.177424 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.177429 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.177434 | orchestrator | 2025-02-19 09:04:54.177439 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-02-19 09:04:54.177443 | orchestrator | Wednesday 19 February 2025 09:00:22 +0000 (0:00:00.789) 0:10:49.126 **** 2025-02-19 09:04:54.177448 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.177453 | orchestrator | 2025-02-19 09:04:54.177458 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-02-19 09:04:54.177463 | orchestrator | Wednesday 19 February 2025 09:00:23 +0000 (0:00:00.951) 0:10:50.078 **** 2025-02-19 09:04:54.177468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.177473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.177477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.177482 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177487 | orchestrator | 2025-02-19 09:04:54.177495 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-02-19 09:04:54.177500 | 
orchestrator | Wednesday 19 February 2025 09:00:23 +0000 (0:00:00.568) 0:10:50.646 **** 2025-02-19 09:04:54.177505 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177509 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177514 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.177519 | orchestrator | 2025-02-19 09:04:54.177524 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-02-19 09:04:54.177529 | orchestrator | Wednesday 19 February 2025 09:00:24 +0000 (0:00:00.523) 0:10:51.170 **** 2025-02-19 09:04:54.177537 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177544 | orchestrator | 2025-02-19 09:04:54.177549 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-02-19 09:04:54.177554 | orchestrator | Wednesday 19 February 2025 09:00:24 +0000 (0:00:00.273) 0:10:51.444 **** 2025-02-19 09:04:54.177559 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177563 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177568 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.177573 | orchestrator | 2025-02-19 09:04:54.177578 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-02-19 09:04:54.177583 | orchestrator | Wednesday 19 February 2025 09:00:25 +0000 (0:00:00.792) 0:10:52.236 **** 2025-02-19 09:04:54.177587 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177592 | orchestrator | 2025-02-19 09:04:54.177597 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-02-19 09:04:54.177602 | orchestrator | Wednesday 19 February 2025 09:00:25 +0000 (0:00:00.327) 0:10:52.564 **** 2025-02-19 09:04:54.177606 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177611 | orchestrator | 2025-02-19 09:04:54.177616 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-02-19 09:04:54.177621 | orchestrator | Wednesday 19 February 2025 09:00:26 +0000 (0:00:00.327) 0:10:52.891 **** 2025-02-19 09:04:54.177626 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177630 | orchestrator | 2025-02-19 09:04:54.177635 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-02-19 09:04:54.177640 | orchestrator | Wednesday 19 February 2025 09:00:26 +0000 (0:00:00.161) 0:10:53.052 **** 2025-02-19 09:04:54.177645 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177649 | orchestrator | 2025-02-19 09:04:54.177654 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-02-19 09:04:54.177659 | orchestrator | Wednesday 19 February 2025 09:00:26 +0000 (0:00:00.269) 0:10:53.321 **** 2025-02-19 09:04:54.177664 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177669 | orchestrator | 2025-02-19 09:04:54.177676 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-02-19 09:04:54.177692 | orchestrator | Wednesday 19 February 2025 09:00:26 +0000 (0:00:00.275) 0:10:53.597 **** 2025-02-19 09:04:54.177698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.177703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.177708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.177712 | 
orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177720 | orchestrator | 2025-02-19 09:04:54.177725 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-02-19 09:04:54.177730 | orchestrator | Wednesday 19 February 2025 09:00:27 +0000 (0:00:00.462) 0:10:54.060 **** 2025-02-19 09:04:54.177735 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177740 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177745 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.177750 | orchestrator | 2025-02-19 09:04:54.177754 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-02-19 09:04:54.177759 | orchestrator | Wednesday 19 February 2025 09:00:27 +0000 (0:00:00.344) 0:10:54.405 **** 2025-02-19 09:04:54.177764 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177769 | orchestrator | 2025-02-19 09:04:54.177774 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-02-19 09:04:54.177779 | orchestrator | Wednesday 19 February 2025 09:00:27 +0000 (0:00:00.240) 0:10:54.645 **** 2025-02-19 09:04:54.177784 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177788 | orchestrator | 2025-02-19 09:04:54.177793 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-19 09:04:54.177798 | orchestrator | Wednesday 19 February 2025 09:00:28 +0000 (0:00:00.681) 0:10:55.327 **** 2025-02-19 09:04:54.177803 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.177811 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.177816 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.177821 | orchestrator | 2025-02-19 09:04:54.177826 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-02-19 09:04:54.177831 | orchestrator | 2025-02-19 09:04:54.177835 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-19 09:04:54.177840 | orchestrator | Wednesday 19 February 2025 09:00:32 +0000 (0:00:04.273) 0:10:59.600 **** 2025-02-19 09:04:54.177845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.177851 | orchestrator | 2025-02-19 09:04:54.177855 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-19 09:04:54.177860 | orchestrator | Wednesday 19 February 2025 09:00:34 +0000 (0:00:01.971) 0:11:01.572 **** 2025-02-19 09:04:54.177865 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.177870 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.177875 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.177880 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.177885 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.177889 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.177894 | orchestrator | 2025-02-19 09:04:54.177899 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-19 09:04:54.177904 | orchestrator | Wednesday 19 February 2025 09:00:35 +0000 (0:00:00.810) 0:11:02.382 **** 2025-02-19 09:04:54.177909 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.177914 | orchestrator | skipping: [testbed-node-1] 2025-02-19 
09:04:54.177919 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.177924 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.177928 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.177933 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.177938 | orchestrator | 2025-02-19 09:04:54.177943 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-19 09:04:54.177948 | orchestrator | Wednesday 19 February 2025 09:00:37 +0000 (0:00:01.441) 0:11:03.824 **** 2025-02-19 09:04:54.177953 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.177958 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.177962 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.177967 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.177972 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.177977 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.177982 | orchestrator | 2025-02-19 09:04:54.177987 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-19 09:04:54.177991 | orchestrator | Wednesday 19 February 2025 09:00:38 +0000 (0:00:01.120) 0:11:04.945 **** 2025-02-19 09:04:54.177996 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178001 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178006 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178011 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.178043 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.178048 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.178053 | orchestrator | 2025-02-19 09:04:54.178058 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-19 09:04:54.178063 | orchestrator | Wednesday 19 February 2025 09:00:39 +0000 (0:00:01.490) 0:11:06.436 **** 2025-02-19 09:04:54.178068 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178072 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178077 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.178082 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178087 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.178092 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.178097 | orchestrator | 2025-02-19 09:04:54.178101 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-19 09:04:54.178109 | orchestrator | Wednesday 19 February 2025 09:00:40 +0000 (0:00:00.747) 0:11:07.184 **** 2025-02-19 09:04:54.178120 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178140 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178147 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178155 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178166 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178173 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178182 | orchestrator | 2025-02-19 09:04:54.178190 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-19 09:04:54.178215 | orchestrator | Wednesday 19 February 2025 09:00:41 +0000 (0:00:00.987) 0:11:08.171 **** 2025-02-19 09:04:54.178221 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178226 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178231 | orchestrator | skipping: [testbed-node-2] 
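Note on the OSD bring-up reported above: the ceph-osd role sets the cluster-wide noup flag, creates the bluestore OSDs with ceph-volume on the logical volumes listed in the task output, starts them through the generated ceph-osd systemd units and ceph-osd.target, unsets noup, and then retries until every OSD reports up. The shell sketch below shows a roughly equivalent manual sequence of ceph CLI calls; it is illustrative only — the <vg>/<lv> names and the OSD id are placeholders, and these are not the exact commands the role executes.

  # Hedged manual equivalent of the ceph-osd tasks above (hypothetical <vg>/<lv> and OSD id 0)
  ceph osd set noup                                              # keep new OSDs from being marked "up" while they start
  ceph-volume lvm create --bluestore --dmcrypt --data <vg>/<lv>  # one call per data LV listed in the task output
  systemctl enable --now ceph-osd@0.service ceph-osd.target      # start via the generated unit files
  ceph osd unset noup                                            # let the new OSDs join the cluster
  ceph osd stat                                                  # poll until all OSDs are up (the retrying "wait for all osd to be up" task)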
2025-02-19 09:04:54.178236 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178244 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178252 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178261 | orchestrator | 2025-02-19 09:04:54.178269 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-19 09:04:54.178277 | orchestrator | Wednesday 19 February 2025 09:00:42 +0000 (0:00:00.958) 0:11:09.130 **** 2025-02-19 09:04:54.178285 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178293 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178301 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178309 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178318 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178326 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178334 | orchestrator | 2025-02-19 09:04:54.178339 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-19 09:04:54.178344 | orchestrator | Wednesday 19 February 2025 09:00:43 +0000 (0:00:01.116) 0:11:10.246 **** 2025-02-19 09:04:54.178348 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178353 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178358 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178363 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178368 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178373 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178377 | orchestrator | 2025-02-19 09:04:54.178382 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-19 09:04:54.178390 | orchestrator | Wednesday 19 February 2025 09:00:44 +0000 (0:00:00.792) 0:11:11.038 **** 2025-02-19 09:04:54.178395 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178400 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178404 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178409 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178414 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178419 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178424 | orchestrator | 2025-02-19 09:04:54.178429 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-19 09:04:54.178433 | orchestrator | Wednesday 19 February 2025 09:00:45 +0000 (0:00:01.070) 0:11:12.109 **** 2025-02-19 09:04:54.178438 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.178443 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.178448 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.178453 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.178457 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.178462 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.178467 | orchestrator | 2025-02-19 09:04:54.178472 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-19 09:04:54.178477 | orchestrator | Wednesday 19 February 2025 09:00:46 +0000 (0:00:01.188) 0:11:13.298 **** 2025-02-19 09:04:54.178482 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178487 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178491 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178502 | 
orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178507 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178512 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178517 | orchestrator | 2025-02-19 09:04:54.178522 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-19 09:04:54.178526 | orchestrator | Wednesday 19 February 2025 09:00:47 +0000 (0:00:01.143) 0:11:14.441 **** 2025-02-19 09:04:54.178531 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.178536 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.178541 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.178546 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178550 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178555 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178560 | orchestrator | 2025-02-19 09:04:54.178565 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-19 09:04:54.178569 | orchestrator | Wednesday 19 February 2025 09:00:48 +0000 (0:00:00.977) 0:11:15.418 **** 2025-02-19 09:04:54.178574 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178579 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178584 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178588 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.178593 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.178598 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.178603 | orchestrator | 2025-02-19 09:04:54.178608 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-19 09:04:54.178612 | orchestrator | Wednesday 19 February 2025 09:00:49 +0000 (0:00:01.139) 0:11:16.558 **** 2025-02-19 09:04:54.178617 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178622 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178630 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178635 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.178640 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.178645 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.178649 | orchestrator | 2025-02-19 09:04:54.178654 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-19 09:04:54.178659 | orchestrator | Wednesday 19 February 2025 09:00:50 +0000 (0:00:00.774) 0:11:17.332 **** 2025-02-19 09:04:54.178664 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178669 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178674 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178678 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.178683 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.178688 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.178693 | orchestrator | 2025-02-19 09:04:54.178698 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-19 09:04:54.178702 | orchestrator | Wednesday 19 February 2025 09:00:51 +0000 (0:00:01.180) 0:11:18.512 **** 2025-02-19 09:04:54.178707 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178712 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178717 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178722 | orchestrator | skipping: [testbed-node-3] 2025-02-19 
09:04:54.178726 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178731 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178736 | orchestrator | 2025-02-19 09:04:54.178756 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-19 09:04:54.178761 | orchestrator | Wednesday 19 February 2025 09:00:52 +0000 (0:00:00.921) 0:11:19.434 **** 2025-02-19 09:04:54.178766 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178771 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178776 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178781 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178786 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178790 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178800 | orchestrator | 2025-02-19 09:04:54.178805 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-19 09:04:54.178810 | orchestrator | Wednesday 19 February 2025 09:00:54 +0000 (0:00:01.494) 0:11:20.928 **** 2025-02-19 09:04:54.178815 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.178820 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.178825 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.178830 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178834 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178839 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178844 | orchestrator | 2025-02-19 09:04:54.178849 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-19 09:04:54.178854 | orchestrator | Wednesday 19 February 2025 09:00:54 +0000 (0:00:00.810) 0:11:21.739 **** 2025-02-19 09:04:54.178859 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.178864 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.178868 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.178873 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.178878 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.178883 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.178888 | orchestrator | 2025-02-19 09:04:54.178893 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-19 09:04:54.178898 | orchestrator | Wednesday 19 February 2025 09:00:56 +0000 (0:00:01.238) 0:11:22.978 **** 2025-02-19 09:04:54.178902 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178907 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178912 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178917 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178922 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.178927 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178932 | orchestrator | 2025-02-19 09:04:54.178936 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-19 09:04:54.178941 | orchestrator | Wednesday 19 February 2025 09:00:57 +0000 (0:00:01.042) 0:11:24.020 **** 2025-02-19 09:04:54.178946 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178951 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178956 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.178961 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.178965 | orchestrator | skipping: 
[testbed-node-4] 2025-02-19 09:04:54.178970 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.178975 | orchestrator | 2025-02-19 09:04:54.178980 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-19 09:04:54.178985 | orchestrator | Wednesday 19 February 2025 09:00:58 +0000 (0:00:01.341) 0:11:25.362 **** 2025-02-19 09:04:54.178989 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.178994 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.178999 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179004 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179011 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179016 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179020 | orchestrator | 2025-02-19 09:04:54.179025 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-19 09:04:54.179030 | orchestrator | Wednesday 19 February 2025 09:00:59 +0000 (0:00:00.755) 0:11:26.118 **** 2025-02-19 09:04:54.179035 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179040 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179045 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179049 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179054 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179059 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179064 | orchestrator | 2025-02-19 09:04:54.179069 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-19 09:04:54.179073 | orchestrator | Wednesday 19 February 2025 09:01:00 +0000 (0:00:01.220) 0:11:27.338 **** 2025-02-19 09:04:54.179081 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179086 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179091 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179096 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179100 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179105 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179110 | orchestrator | 2025-02-19 09:04:54.179115 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-19 09:04:54.179149 | orchestrator | Wednesday 19 February 2025 09:01:01 +0000 (0:00:00.875) 0:11:28.214 **** 2025-02-19 09:04:54.179156 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179160 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179165 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179170 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179175 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179180 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179184 | orchestrator | 2025-02-19 09:04:54.179189 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-19 09:04:54.179194 | orchestrator | Wednesday 19 February 2025 09:01:02 +0000 (0:00:01.535) 0:11:29.750 **** 2025-02-19 09:04:54.179199 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179204 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179209 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179213 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179218 | orchestrator | 
skipping: [testbed-node-4] 2025-02-19 09:04:54.179223 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179228 | orchestrator | 2025-02-19 09:04:54.179233 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-19 09:04:54.179238 | orchestrator | Wednesday 19 February 2025 09:01:04 +0000 (0:00:01.076) 0:11:30.827 **** 2025-02-19 09:04:54.179243 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179248 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179253 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179271 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179277 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179282 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179286 | orchestrator | 2025-02-19 09:04:54.179291 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-19 09:04:54.179296 | orchestrator | Wednesday 19 February 2025 09:01:05 +0000 (0:00:01.221) 0:11:32.048 **** 2025-02-19 09:04:54.179301 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179306 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179311 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179316 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179321 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179326 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179331 | orchestrator | 2025-02-19 09:04:54.179336 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-19 09:04:54.179341 | orchestrator | Wednesday 19 February 2025 09:01:06 +0000 (0:00:00.983) 0:11:33.032 **** 2025-02-19 09:04:54.179345 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179350 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179355 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179360 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179365 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179370 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179374 | orchestrator | 2025-02-19 09:04:54.179379 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-19 09:04:54.179384 | orchestrator | Wednesday 19 February 2025 09:01:07 +0000 (0:00:01.360) 0:11:34.392 **** 2025-02-19 09:04:54.179389 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179394 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179404 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179409 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179414 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179419 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179423 | orchestrator | 2025-02-19 09:04:54.179428 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-19 09:04:54.179433 | orchestrator | Wednesday 19 February 2025 09:01:08 +0000 (0:00:00.835) 0:11:35.227 **** 2025-02-19 09:04:54.179438 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179452 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179461 | orchestrator | skipping: [testbed-node-2] 2025-02-19 
09:04:54.179469 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179477 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179485 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179493 | orchestrator | 2025-02-19 09:04:54.179501 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-19 09:04:54.179510 | orchestrator | Wednesday 19 February 2025 09:01:09 +0000 (0:00:01.240) 0:11:36.469 **** 2025-02-19 09:04:54.179518 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.179527 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-19 09:04:54.179535 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179540 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.179545 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-19 09:04:54.179550 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179555 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.179560 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-19 09:04:54.179565 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179569 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.179574 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.179579 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179584 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.179589 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.179594 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179598 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.179603 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.179608 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179613 | orchestrator | 2025-02-19 09:04:54.179618 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-19 09:04:54.179623 | orchestrator | Wednesday 19 February 2025 09:01:10 +0000 (0:00:00.850) 0:11:37.319 **** 2025-02-19 09:04:54.179628 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-19 09:04:54.179632 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-19 09:04:54.179637 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179642 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-19 09:04:54.179647 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-19 09:04:54.179652 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179657 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-19 09:04:54.179661 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-19 09:04:54.179666 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179671 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-19 09:04:54.179676 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-19 09:04:54.179681 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179686 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-19 09:04:54.179691 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-19 09:04:54.179695 | orchestrator | skipping: 
[testbed-node-4] 2025-02-19 09:04:54.179704 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-19 09:04:54.179709 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-19 09:04:54.179714 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179719 | orchestrator | 2025-02-19 09:04:54.179724 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-19 09:04:54.179743 | orchestrator | Wednesday 19 February 2025 09:01:11 +0000 (0:00:01.381) 0:11:38.700 **** 2025-02-19 09:04:54.179748 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179753 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179758 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179763 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179768 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179773 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179778 | orchestrator | 2025-02-19 09:04:54.179783 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-19 09:04:54.179787 | orchestrator | Wednesday 19 February 2025 09:01:12 +0000 (0:00:00.846) 0:11:39.547 **** 2025-02-19 09:04:54.179792 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179797 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179802 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179807 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179812 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179817 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179821 | orchestrator | 2025-02-19 09:04:54.179826 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.179832 | orchestrator | Wednesday 19 February 2025 09:01:14 +0000 (0:00:01.376) 0:11:40.923 **** 2025-02-19 09:04:54.179837 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179841 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179846 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179851 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179856 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179861 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179866 | orchestrator | 2025-02-19 09:04:54.179870 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.179875 | orchestrator | Wednesday 19 February 2025 09:01:14 +0000 (0:00:00.767) 0:11:41.690 **** 2025-02-19 09:04:54.179880 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179885 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179890 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179895 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179900 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179904 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179909 | orchestrator | 2025-02-19 09:04:54.179914 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:04:54.179919 | orchestrator | Wednesday 19 February 2025 09:01:16 +0000 (0:00:01.171) 0:11:42.861 **** 2025-02-19 09:04:54.179924 | orchestrator | skipping: [testbed-node-0] 
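[Editor's note] The ceph-config tasks above that reference 'ceph-volume lvm batch --report' and 'ceph-volume lvm list' are all skipped on these hosts, but what they would compute is a num_osds fact: the count of OSDs the batch report says would be created, plus any OSDs that already exist. A minimal sketch of that counting logic in Python, using hypothetical sample payloads (the real role does this with set_fact and Jinja filters, and the legacy/new report shapes shown here are assumptions, not taken from this log):

```python
import json

def count_osds(batch_report_json: str, lvm_list_json: str = "{}") -> int:
    """Roughly what the skipped set_fact tasks compute: OSDs that
    'ceph-volume lvm batch --report' would create, plus OSDs that
    'ceph-volume lvm list' reports as already present."""
    report = json.loads(batch_report_json)
    if isinstance(report, dict) and "osds" in report:
        # legacy report format: a dict containing an "osds" list
        num_osds = len(report["osds"])
    else:
        # newer report format: a plain list of OSD specs
        num_osds = len(report)
    # assumed: 'ceph-volume lvm list --format json' keys existing OSDs by id
    num_osds += len(json.loads(lvm_list_json))
    return num_osds

# hypothetical sample payloads, only to exercise the function
new_style_report = '[{"data": "/dev/sdb"}, {"data": "/dev/sdc"}]'
existing_osds = '{"0": [{"type": "block"}]}'
print(count_osds(new_style_report, existing_osds))  # -> 3
```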
2025-02-19 09:04:54.179928 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179933 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179938 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179943 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179948 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.179952 | orchestrator | 2025-02-19 09:04:54.179957 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.179962 | orchestrator | Wednesday 19 February 2025 09:01:16 +0000 (0:00:00.602) 0:11:43.464 **** 2025-02-19 09:04:54.179967 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.179972 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.179977 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.179981 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.179993 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.179997 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180002 | orchestrator | 2025-02-19 09:04:54.180007 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.180012 | orchestrator | Wednesday 19 February 2025 09:01:17 +0000 (0:00:00.939) 0:11:44.403 **** 2025-02-19 09:04:54.180017 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.180022 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.180027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.180031 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180036 | orchestrator | 2025-02-19 09:04:54.180041 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.180046 | orchestrator | Wednesday 19 February 2025 09:01:18 +0000 (0:00:00.464) 0:11:44.867 **** 2025-02-19 09:04:54.180051 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.180056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.180061 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.180066 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180071 | orchestrator | 2025-02-19 09:04:54.180075 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.180080 | orchestrator | Wednesday 19 February 2025 09:01:18 +0000 (0:00:00.543) 0:11:45.411 **** 2025-02-19 09:04:54.180085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.180090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.180095 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.180100 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180105 | orchestrator | 2025-02-19 09:04:54.180110 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.180115 | orchestrator | Wednesday 19 February 2025 09:01:19 +0000 (0:00:00.482) 0:11:45.894 **** 2025-02-19 09:04:54.180119 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180135 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180140 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180145 | orchestrator | 
skipping: [testbed-node-3] 2025-02-19 09:04:54.180150 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180155 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180160 | orchestrator | 2025-02-19 09:04:54.180167 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.180172 | orchestrator | Wednesday 19 February 2025 09:01:19 +0000 (0:00:00.909) 0:11:46.803 **** 2025-02-19 09:04:54.180177 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.180182 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180199 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.180205 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180210 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.180214 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180219 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.180224 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180229 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.180234 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180239 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.180244 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180249 | orchestrator | 2025-02-19 09:04:54.180254 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.180259 | orchestrator | Wednesday 19 February 2025 09:01:21 +0000 (0:00:01.021) 0:11:47.824 **** 2025-02-19 09:04:54.180264 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180269 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180278 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180283 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180288 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180292 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180297 | orchestrator | 2025-02-19 09:04:54.180302 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.180307 | orchestrator | Wednesday 19 February 2025 09:01:22 +0000 (0:00:01.186) 0:11:49.010 **** 2025-02-19 09:04:54.180312 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180317 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180322 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180327 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180331 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180336 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180341 | orchestrator | 2025-02-19 09:04:54.180346 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.180351 | orchestrator | Wednesday 19 February 2025 09:01:23 +0000 (0:00:01.244) 0:11:50.255 **** 2025-02-19 09:04:54.180356 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-19 09:04:54.180360 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180365 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-19 09:04:54.180370 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180375 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-19 09:04:54.180380 | orchestrator | skipping: 
[testbed-node-2] 2025-02-19 09:04:54.180385 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.180389 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180394 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.180399 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180404 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.180409 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180414 | orchestrator | 2025-02-19 09:04:54.180419 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.180423 | orchestrator | Wednesday 19 February 2025 09:01:25 +0000 (0:00:02.076) 0:11:52.331 **** 2025-02-19 09:04:54.180428 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180433 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180438 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180443 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.180448 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180453 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.180458 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180463 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.180467 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180472 | orchestrator | 2025-02-19 09:04:54.180477 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.180482 | orchestrator | Wednesday 19 February 2025 09:01:26 +0000 (0:00:00.843) 0:11:53.174 **** 2025-02-19 09:04:54.180487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-19 09:04:54.180492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-19 09:04:54.180497 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-19 09:04:54.180502 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180506 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-19 09:04:54.180511 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-19 09:04:54.180516 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-19 09:04:54.180524 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180529 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-19 09:04:54.180534 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-19 09:04:54.180539 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-19 09:04:54.180544 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.180553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.180558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.180563 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  
2025-02-19 09:04:54.180580 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 09:04:54.180588 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 09:04:54.180595 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180606 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-19 09:04:54.180622 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 09:04:54.180631 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 09:04:54.180639 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180646 | orchestrator | 2025-02-19 09:04:54.180654 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-19 09:04:54.180662 | orchestrator | Wednesday 19 February 2025 09:01:28 +0000 (0:00:02.040) 0:11:55.215 **** 2025-02-19 09:04:54.180667 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180672 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180677 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180682 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180687 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180695 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180700 | orchestrator | 2025-02-19 09:04:54.180705 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-19 09:04:54.180710 | orchestrator | Wednesday 19 February 2025 09:01:29 +0000 (0:00:01.488) 0:11:56.703 **** 2025-02-19 09:04:54.180715 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180720 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180725 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180730 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.180735 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180740 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-19 09:04:54.180745 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180750 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-19 09:04:54.180755 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180759 | orchestrator | 2025-02-19 09:04:54.180764 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-19 09:04:54.180769 | orchestrator | Wednesday 19 February 2025 09:01:31 +0000 (0:00:01.486) 0:11:58.189 **** 2025-02-19 09:04:54.180774 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180779 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180784 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180789 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180793 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180798 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180803 | orchestrator | 2025-02-19 09:04:54.180808 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-19 09:04:54.180813 | orchestrator | Wednesday 19 February 2025 09:01:32 +0000 (0:00:01.330) 0:11:59.519 **** 2025-02-19 09:04:54.180818 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:04:54.180823 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:04:54.180828 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:04:54.180836 | 
orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.180841 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.180846 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.180851 | orchestrator | 2025-02-19 09:04:54.180856 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-02-19 09:04:54.180861 | orchestrator | Wednesday 19 February 2025 09:01:34 +0000 (0:00:01.498) 0:12:01.017 **** 2025-02-19 09:04:54.180865 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.180870 | orchestrator | 2025-02-19 09:04:54.180875 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-02-19 09:04:54.180880 | orchestrator | Wednesday 19 February 2025 09:01:38 +0000 (0:00:04.034) 0:12:05.052 **** 2025-02-19 09:04:54.180885 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.180890 | orchestrator | 2025-02-19 09:04:54.180895 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-02-19 09:04:54.180900 | orchestrator | Wednesday 19 February 2025 09:01:39 +0000 (0:00:01.618) 0:12:06.670 **** 2025-02-19 09:04:54.180905 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.180909 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.180914 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.180919 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.180924 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.180929 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.180934 | orchestrator | 2025-02-19 09:04:54.180939 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-02-19 09:04:54.180944 | orchestrator | Wednesday 19 February 2025 09:01:41 +0000 (0:00:01.532) 0:12:08.203 **** 2025-02-19 09:04:54.180948 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.180953 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.180958 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.180963 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.180968 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.180972 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.180977 | orchestrator | 2025-02-19 09:04:54.180982 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-02-19 09:04:54.180987 | orchestrator | Wednesday 19 February 2025 09:01:42 +0000 (0:00:01.597) 0:12:09.801 **** 2025-02-19 09:04:54.180992 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.180998 | orchestrator | 2025-02-19 09:04:54.181003 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-02-19 09:04:54.181010 | orchestrator | Wednesday 19 February 2025 09:01:45 +0000 (0:00:02.011) 0:12:11.813 **** 2025-02-19 09:04:54.181015 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.181020 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.181024 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.181029 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.181034 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.181039 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.181044 | orchestrator | 2025-02-19 
09:04:54.181049 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-02-19 09:04:54.181054 | orchestrator | Wednesday 19 February 2025 09:01:46 +0000 (0:00:01.735) 0:12:13.548 **** 2025-02-19 09:04:54.181058 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.181063 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.181068 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.181073 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.181082 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.181087 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.181092 | orchestrator | 2025-02-19 09:04:54.181097 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-02-19 09:04:54.181102 | orchestrator | Wednesday 19 February 2025 09:01:52 +0000 (0:00:05.628) 0:12:19.177 **** 2025-02-19 09:04:54.181110 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.181115 | orchestrator | 2025-02-19 09:04:54.181120 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-02-19 09:04:54.181135 | orchestrator | Wednesday 19 February 2025 09:01:53 +0000 (0:00:01.335) 0:12:20.512 **** 2025-02-19 09:04:54.181139 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.181144 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.181149 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.181154 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181159 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181166 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181171 | orchestrator | 2025-02-19 09:04:54.181176 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-02-19 09:04:54.181181 | orchestrator | Wednesday 19 February 2025 09:01:54 +0000 (0:00:01.062) 0:12:21.574 **** 2025-02-19 09:04:54.181186 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:04:54.181190 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:04:54.181195 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.181200 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:04:54.181205 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.181210 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.181215 | orchestrator | 2025-02-19 09:04:54.181219 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-02-19 09:04:54.181224 | orchestrator | Wednesday 19 February 2025 09:01:57 +0000 (0:00:02.621) 0:12:24.195 **** 2025-02-19 09:04:54.181229 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:04:54.181234 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:04:54.181239 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:04:54.181243 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181248 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181253 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181258 | orchestrator | 2025-02-19 09:04:54.181263 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-02-19 09:04:54.181267 | orchestrator | 2025-02-19 09:04:54.181272 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-19 
09:04:54.181277 | orchestrator | Wednesday 19 February 2025 09:02:00 +0000 (0:00:03.318) 0:12:27.513 **** 2025-02-19 09:04:54.181282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.181289 | orchestrator | 2025-02-19 09:04:54.181294 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-19 09:04:54.181299 | orchestrator | Wednesday 19 February 2025 09:02:01 +0000 (0:00:00.887) 0:12:28.401 **** 2025-02-19 09:04:54.181304 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181309 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181314 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181319 | orchestrator | 2025-02-19 09:04:54.181323 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-19 09:04:54.181328 | orchestrator | Wednesday 19 February 2025 09:02:02 +0000 (0:00:00.765) 0:12:29.166 **** 2025-02-19 09:04:54.181333 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181338 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181343 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181347 | orchestrator | 2025-02-19 09:04:54.181352 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-19 09:04:54.181357 | orchestrator | Wednesday 19 February 2025 09:02:03 +0000 (0:00:00.879) 0:12:30.045 **** 2025-02-19 09:04:54.181362 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181367 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181372 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181376 | orchestrator | 2025-02-19 09:04:54.181381 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-19 09:04:54.181389 | orchestrator | Wednesday 19 February 2025 09:02:04 +0000 (0:00:01.002) 0:12:31.048 **** 2025-02-19 09:04:54.181394 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181399 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181404 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181409 | orchestrator | 2025-02-19 09:04:54.181413 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-19 09:04:54.181418 | orchestrator | Wednesday 19 February 2025 09:02:05 +0000 (0:00:00.862) 0:12:31.911 **** 2025-02-19 09:04:54.181423 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181428 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181433 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181438 | orchestrator | 2025-02-19 09:04:54.181443 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-19 09:04:54.181448 | orchestrator | Wednesday 19 February 2025 09:02:05 +0000 (0:00:00.808) 0:12:32.719 **** 2025-02-19 09:04:54.181452 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181457 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181462 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181467 | orchestrator | 2025-02-19 09:04:54.181472 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-19 09:04:54.181476 | orchestrator | Wednesday 19 February 2025 09:02:06 +0000 (0:00:00.393) 0:12:33.113 **** 2025-02-19 09:04:54.181481 | orchestrator | skipping: 
[testbed-node-3] 2025-02-19 09:04:54.181486 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181491 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181496 | orchestrator | 2025-02-19 09:04:54.181501 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-19 09:04:54.181506 | orchestrator | Wednesday 19 February 2025 09:02:06 +0000 (0:00:00.532) 0:12:33.645 **** 2025-02-19 09:04:54.181510 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181515 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181520 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181525 | orchestrator | 2025-02-19 09:04:54.181532 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-19 09:04:54.181537 | orchestrator | Wednesday 19 February 2025 09:02:07 +0000 (0:00:00.418) 0:12:34.063 **** 2025-02-19 09:04:54.181542 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181547 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181552 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181556 | orchestrator | 2025-02-19 09:04:54.181564 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-19 09:04:54.181569 | orchestrator | Wednesday 19 February 2025 09:02:08 +0000 (0:00:00.939) 0:12:35.003 **** 2025-02-19 09:04:54.181574 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181579 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181583 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181588 | orchestrator | 2025-02-19 09:04:54.181593 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-19 09:04:54.181598 | orchestrator | Wednesday 19 February 2025 09:02:08 +0000 (0:00:00.586) 0:12:35.590 **** 2025-02-19 09:04:54.181603 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181608 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181612 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181617 | orchestrator | 2025-02-19 09:04:54.181622 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-19 09:04:54.181627 | orchestrator | Wednesday 19 February 2025 09:02:09 +0000 (0:00:00.983) 0:12:36.574 **** 2025-02-19 09:04:54.181632 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181637 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181642 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181646 | orchestrator | 2025-02-19 09:04:54.181651 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-19 09:04:54.181656 | orchestrator | Wednesday 19 February 2025 09:02:10 +0000 (0:00:00.416) 0:12:36.990 **** 2025-02-19 09:04:54.181663 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181668 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181676 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181681 | orchestrator | 2025-02-19 09:04:54.181685 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-19 09:04:54.181690 | orchestrator | Wednesday 19 February 2025 09:02:11 +0000 (0:00:00.850) 0:12:37.841 **** 2025-02-19 09:04:54.181695 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181700 | orchestrator | ok: [testbed-node-4] 
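[Editor's note] The check_running_containers.yml include that opens the ceph-mds play simply probes which Ceph daemon containers are already up on each host; the handler_*_status facts set a little further down are derived from those probes so the handlers know whether a restart is needed. A rough sketch of such a probe, assuming a Docker-compatible CLI and a ceph-<daemon>-<hostname> container naming scheme (both assumptions, not confirmed by this log):

```python
import subprocess

def container_running(name_filter: str, engine: str = "docker") -> bool:
    """Return True if a container matching name_filter is running.
    Illustrates the idea behind the container checks; the exact filter
    and naming are assumptions."""
    result = subprocess.run(
        [engine, "ps", "-q", "--filter", f"name={name_filter}"],
        capture_output=True, text=True, check=False,
    )
    return bool(result.stdout.strip())

# e.g. on testbed-node-3 this would feed a fact like handler_mds_status
print(container_running("ceph-mds-testbed-node-3"))
```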
2025-02-19 09:04:54.181705 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181710 | orchestrator | 2025-02-19 09:04:54.181715 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-19 09:04:54.181719 | orchestrator | Wednesday 19 February 2025 09:02:11 +0000 (0:00:00.499) 0:12:38.341 **** 2025-02-19 09:04:54.181724 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181729 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181734 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181739 | orchestrator | 2025-02-19 09:04:54.181744 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-19 09:04:54.181749 | orchestrator | Wednesday 19 February 2025 09:02:11 +0000 (0:00:00.400) 0:12:38.741 **** 2025-02-19 09:04:54.181753 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181758 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181763 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181768 | orchestrator | 2025-02-19 09:04:54.181773 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-19 09:04:54.181777 | orchestrator | Wednesday 19 February 2025 09:02:12 +0000 (0:00:00.456) 0:12:39.198 **** 2025-02-19 09:04:54.181782 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181787 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181792 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181797 | orchestrator | 2025-02-19 09:04:54.181802 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-19 09:04:54.181807 | orchestrator | Wednesday 19 February 2025 09:02:12 +0000 (0:00:00.543) 0:12:39.742 **** 2025-02-19 09:04:54.181812 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181816 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181821 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181826 | orchestrator | 2025-02-19 09:04:54.181831 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-19 09:04:54.181836 | orchestrator | Wednesday 19 February 2025 09:02:13 +0000 (0:00:00.392) 0:12:40.134 **** 2025-02-19 09:04:54.181841 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181845 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181850 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181855 | orchestrator | 2025-02-19 09:04:54.181860 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-19 09:04:54.181865 | orchestrator | Wednesday 19 February 2025 09:02:13 +0000 (0:00:00.354) 0:12:40.489 **** 2025-02-19 09:04:54.181870 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.181874 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.181879 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.181884 | orchestrator | 2025-02-19 09:04:54.181889 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-19 09:04:54.181894 | orchestrator | Wednesday 19 February 2025 09:02:14 +0000 (0:00:00.377) 0:12:40.866 **** 2025-02-19 09:04:54.181899 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181904 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181908 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181913 | orchestrator | 
2025-02-19 09:04:54.181918 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-19 09:04:54.181923 | orchestrator | Wednesday 19 February 2025 09:02:14 +0000 (0:00:00.526) 0:12:41.392 **** 2025-02-19 09:04:54.181928 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181935 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181940 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181945 | orchestrator | 2025-02-19 09:04:54.181950 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-19 09:04:54.181955 | orchestrator | Wednesday 19 February 2025 09:02:14 +0000 (0:00:00.318) 0:12:41.711 **** 2025-02-19 09:04:54.181960 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181965 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.181969 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.181974 | orchestrator | 2025-02-19 09:04:54.181979 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-19 09:04:54.181986 | orchestrator | Wednesday 19 February 2025 09:02:15 +0000 (0:00:00.379) 0:12:42.090 **** 2025-02-19 09:04:54.181991 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.181996 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182000 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182005 | orchestrator | 2025-02-19 09:04:54.182010 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-19 09:04:54.182031 | orchestrator | Wednesday 19 February 2025 09:02:15 +0000 (0:00:00.289) 0:12:42.380 **** 2025-02-19 09:04:54.182036 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182041 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182045 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182050 | orchestrator | 2025-02-19 09:04:54.182058 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-19 09:04:54.182063 | orchestrator | Wednesday 19 February 2025 09:02:16 +0000 (0:00:00.521) 0:12:42.901 **** 2025-02-19 09:04:54.182067 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182072 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182077 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182082 | orchestrator | 2025-02-19 09:04:54.182087 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-19 09:04:54.182092 | orchestrator | Wednesday 19 February 2025 09:02:16 +0000 (0:00:00.340) 0:12:43.242 **** 2025-02-19 09:04:54.182096 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182101 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182106 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182111 | orchestrator | 2025-02-19 09:04:54.182116 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-19 09:04:54.182121 | orchestrator | Wednesday 19 February 2025 09:02:16 +0000 (0:00:00.352) 0:12:43.594 **** 2025-02-19 09:04:54.182135 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182140 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182145 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182150 | orchestrator | 2025-02-19 09:04:54.182155 | orchestrator 
| TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-19 09:04:54.182160 | orchestrator | Wednesday 19 February 2025 09:02:17 +0000 (0:00:00.333) 0:12:43.928 **** 2025-02-19 09:04:54.182165 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182169 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182174 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182179 | orchestrator | 2025-02-19 09:04:54.182184 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-19 09:04:54.182189 | orchestrator | Wednesday 19 February 2025 09:02:17 +0000 (0:00:00.630) 0:12:44.558 **** 2025-02-19 09:04:54.182194 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182199 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182206 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182211 | orchestrator | 2025-02-19 09:04:54.182215 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-19 09:04:54.182220 | orchestrator | Wednesday 19 February 2025 09:02:18 +0000 (0:00:00.643) 0:12:45.202 **** 2025-02-19 09:04:54.182228 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182233 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182238 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182243 | orchestrator | 2025-02-19 09:04:54.182248 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-19 09:04:54.182253 | orchestrator | Wednesday 19 February 2025 09:02:18 +0000 (0:00:00.542) 0:12:45.745 **** 2025-02-19 09:04:54.182258 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182262 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182267 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182272 | orchestrator | 2025-02-19 09:04:54.182277 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-19 09:04:54.182282 | orchestrator | Wednesday 19 February 2025 09:02:19 +0000 (0:00:00.421) 0:12:46.166 **** 2025-02-19 09:04:54.182287 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.182292 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.182297 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182301 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.182306 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.182311 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182316 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.182321 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.182326 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182330 | orchestrator | 2025-02-19 09:04:54.182335 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-19 09:04:54.182340 | orchestrator | Wednesday 19 February 2025 09:02:20 +0000 (0:00:00.640) 0:12:46.806 **** 2025-02-19 09:04:54.182345 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-19 09:04:54.182350 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-19 09:04:54.182355 | orchestrator | skipping: [testbed-node-3] 
2025-02-19 09:04:54.182360 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-19 09:04:54.182365 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-19 09:04:54.182370 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182375 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-19 09:04:54.182379 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-19 09:04:54.182384 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182389 | orchestrator | 2025-02-19 09:04:54.182394 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-19 09:04:54.182399 | orchestrator | Wednesday 19 February 2025 09:02:20 +0000 (0:00:00.410) 0:12:47.217 **** 2025-02-19 09:04:54.182404 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182408 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182413 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182418 | orchestrator | 2025-02-19 09:04:54.182425 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-19 09:04:54.182430 | orchestrator | Wednesday 19 February 2025 09:02:20 +0000 (0:00:00.336) 0:12:47.553 **** 2025-02-19 09:04:54.182435 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182440 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182445 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182450 | orchestrator | 2025-02-19 09:04:54.182455 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.182460 | orchestrator | Wednesday 19 February 2025 09:02:21 +0000 (0:00:00.328) 0:12:47.882 **** 2025-02-19 09:04:54.182464 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182469 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182474 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182479 | orchestrator | 2025-02-19 09:04:54.182484 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.182493 | orchestrator | Wednesday 19 February 2025 09:02:21 +0000 (0:00:00.640) 0:12:48.523 **** 2025-02-19 09:04:54.182498 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182503 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182508 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182513 | orchestrator | 2025-02-19 09:04:54.182518 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:04:54.182522 | orchestrator | Wednesday 19 February 2025 09:02:22 +0000 (0:00:00.423) 0:12:48.946 **** 2025-02-19 09:04:54.182527 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182532 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182537 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182541 | orchestrator | 2025-02-19 09:04:54.182546 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.182551 | orchestrator | Wednesday 19 February 2025 09:02:22 +0000 (0:00:00.369) 0:12:49.315 **** 2025-02-19 09:04:54.182556 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182561 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182566 | 
orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182570 | orchestrator | 2025-02-19 09:04:54.182575 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.182580 | orchestrator | Wednesday 19 February 2025 09:02:22 +0000 (0:00:00.315) 0:12:49.630 **** 2025-02-19 09:04:54.182585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.182590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.182595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.182600 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182604 | orchestrator | 2025-02-19 09:04:54.182609 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.182614 | orchestrator | Wednesday 19 February 2025 09:02:23 +0000 (0:00:00.670) 0:12:50.301 **** 2025-02-19 09:04:54.182619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.182624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.182629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.182633 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182638 | orchestrator | 2025-02-19 09:04:54.182643 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.182648 | orchestrator | Wednesday 19 February 2025 09:02:24 +0000 (0:00:00.779) 0:12:51.081 **** 2025-02-19 09:04:54.182653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.182658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.182663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.182668 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182673 | orchestrator | 2025-02-19 09:04:54.182678 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.182683 | orchestrator | Wednesday 19 February 2025 09:02:24 +0000 (0:00:00.448) 0:12:51.529 **** 2025-02-19 09:04:54.182687 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182692 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182697 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182702 | orchestrator | 2025-02-19 09:04:54.182707 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.182711 | orchestrator | Wednesday 19 February 2025 09:02:25 +0000 (0:00:00.339) 0:12:51.869 **** 2025-02-19 09:04:54.182716 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.182721 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182726 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.182731 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182736 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.182743 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182748 | orchestrator | 2025-02-19 09:04:54.182753 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.182758 | orchestrator | Wednesday 19 February 2025 09:02:25 +0000 (0:00:00.490) 0:12:52.359 **** 2025-02-19 09:04:54.182763 | 
orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182768 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182773 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182778 | orchestrator | 2025-02-19 09:04:54.182782 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.182787 | orchestrator | Wednesday 19 February 2025 09:02:25 +0000 (0:00:00.319) 0:12:52.679 **** 2025-02-19 09:04:54.182792 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182797 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182801 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182806 | orchestrator | 2025-02-19 09:04:54.182815 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.182820 | orchestrator | Wednesday 19 February 2025 09:02:26 +0000 (0:00:00.626) 0:12:53.306 **** 2025-02-19 09:04:54.182825 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.182829 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182837 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.182842 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182847 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.182852 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182856 | orchestrator | 2025-02-19 09:04:54.182861 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.182866 | orchestrator | Wednesday 19 February 2025 09:02:27 +0000 (0:00:00.656) 0:12:53.963 **** 2025-02-19 09:04:54.182871 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.182878 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182884 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.182888 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182893 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.182898 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182903 | orchestrator | 2025-02-19 09:04:54.182908 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.182913 | orchestrator | Wednesday 19 February 2025 09:02:27 +0000 (0:00:00.673) 0:12:54.637 **** 2025-02-19 09:04:54.182918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.182923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.182928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.182933 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-19 09:04:54.182938 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 09:04:54.182943 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 09:04:54.182948 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.182953 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.182958 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-3)  2025-02-19 09:04:54.182963 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 09:04:54.182970 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 09:04:54.182975 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.182980 | orchestrator | 2025-02-19 09:04:54.182985 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-19 09:04:54.182992 | orchestrator | Wednesday 19 February 2025 09:02:29 +0000 (0:00:01.530) 0:12:56.167 **** 2025-02-19 09:04:54.182997 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.183002 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.183007 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.183012 | orchestrator | 2025-02-19 09:04:54.183016 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-19 09:04:54.183021 | orchestrator | Wednesday 19 February 2025 09:02:30 +0000 (0:00:00.839) 0:12:57.007 **** 2025-02-19 09:04:54.183026 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.183031 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.183036 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-19 09:04:54.183040 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.183045 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-19 09:04:54.183050 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.183055 | orchestrator | 2025-02-19 09:04:54.183060 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-19 09:04:54.183064 | orchestrator | Wednesday 19 February 2025 09:02:31 +0000 (0:00:01.261) 0:12:58.268 **** 2025-02-19 09:04:54.183069 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.183074 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.183079 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.183084 | orchestrator | 2025-02-19 09:04:54.183089 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-19 09:04:54.183093 | orchestrator | Wednesday 19 February 2025 09:02:32 +0000 (0:00:00.865) 0:12:59.133 **** 2025-02-19 09:04:54.183098 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.183103 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.183108 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.183113 | orchestrator | 2025-02-19 09:04:54.183118 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-02-19 09:04:54.183148 | orchestrator | Wednesday 19 February 2025 09:02:33 +0000 (0:00:01.138) 0:13:00.272 **** 2025-02-19 09:04:54.183154 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.183158 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.183163 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-02-19 09:04:54.183168 | orchestrator | 2025-02-19 09:04:54.183173 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-02-19 09:04:54.183178 | orchestrator | Wednesday 19 February 2025 09:02:34 +0000 (0:00:00.641) 0:13:00.914 **** 2025-02-19 09:04:54.183183 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-19 09:04:54.183187 | orchestrator | 2025-02-19 
09:04:54.183192 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-02-19 09:04:54.183197 | orchestrator | Wednesday 19 February 2025 09:02:36 +0000 (0:00:02.262) 0:13:03.177 **** 2025-02-19 09:04:54.183203 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-02-19 09:04:54.183213 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.183218 | orchestrator | 2025-02-19 09:04:54.183223 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-02-19 09:04:54.183230 | orchestrator | Wednesday 19 February 2025 09:02:37 +0000 (0:00:00.983) 0:13:04.160 **** 2025-02-19 09:04:54.183237 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-19 09:04:54.183243 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-19 09:04:54.183251 | orchestrator | 2025-02-19 09:04:54.183256 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-02-19 09:04:54.183261 | orchestrator | Wednesday 19 February 2025 09:02:44 +0000 (0:00:07.608) 0:13:11.769 **** 2025-02-19 09:04:54.183266 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-19 09:04:54.183271 | orchestrator | 2025-02-19 09:04:54.183278 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-02-19 09:04:54.183283 | orchestrator | Wednesday 19 February 2025 09:02:48 +0000 (0:00:03.581) 0:13:15.351 **** 2025-02-19 09:04:54.183288 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.183293 | orchestrator | 2025-02-19 09:04:54.183297 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-02-19 09:04:54.183302 | orchestrator | Wednesday 19 February 2025 09:02:49 +0000 (0:00:00.868) 0:13:16.220 **** 2025-02-19 09:04:54.183307 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-19 09:04:54.183312 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-19 09:04:54.183317 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-19 09:04:54.183321 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-02-19 09:04:54.183326 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-02-19 09:04:54.183331 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-02-19 09:04:54.183336 | orchestrator | 2025-02-19 09:04:54.183341 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-02-19 09:04:54.183346 | orchestrator | Wednesday 19 February 2025 09:02:50 +0000 (0:00:01.262) 0:13:17.482 **** 
2025-02-19 09:04:54.183350 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:04:54.183355 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.183360 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-19 09:04:54.183365 | orchestrator | 2025-02-19 09:04:54.183370 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-02-19 09:04:54.183375 | orchestrator | Wednesday 19 February 2025 09:02:52 +0000 (0:00:01.950) 0:13:19.432 **** 2025-02-19 09:04:54.183379 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-19 09:04:54.183384 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.183389 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.183394 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-19 09:04:54.183399 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-19 09:04:54.183404 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.183408 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-19 09:04:54.183413 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-19 09:04:54.183418 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.183423 | orchestrator | 2025-02-19 09:04:54.183427 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-02-19 09:04:54.183432 | orchestrator | Wednesday 19 February 2025 09:02:53 +0000 (0:00:01.075) 0:13:20.507 **** 2025-02-19 09:04:54.183437 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.183442 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.183446 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.183451 | orchestrator | 2025-02-19 09:04:54.183456 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-02-19 09:04:54.183461 | orchestrator | Wednesday 19 February 2025 09:02:54 +0000 (0:00:00.343) 0:13:20.850 **** 2025-02-19 09:04:54.183466 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.183473 | orchestrator | 2025-02-19 09:04:54.183478 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-02-19 09:04:54.183483 | orchestrator | Wednesday 19 February 2025 09:02:54 +0000 (0:00:00.917) 0:13:21.768 **** 2025-02-19 09:04:54.183488 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.183493 | orchestrator | 2025-02-19 09:04:54.183497 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-02-19 09:04:54.183502 | orchestrator | Wednesday 19 February 2025 09:02:55 +0000 (0:00:00.737) 0:13:22.506 **** 2025-02-19 09:04:54.183507 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.183512 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.183516 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.183521 | orchestrator | 2025-02-19 09:04:54.183526 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-02-19 09:04:54.183531 | orchestrator | Wednesday 19 February 2025 09:02:57 +0000 (0:00:01.880) 0:13:24.386 **** 2025-02-19 09:04:54.183536 | orchestrator | changed: [testbed-node-3] 2025-02-19 
09:04:54.183540 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.183545 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.183550 | orchestrator | 2025-02-19 09:04:54.183557 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-02-19 09:04:54.183562 | orchestrator | Wednesday 19 February 2025 09:02:58 +0000 (0:00:01.377) 0:13:25.764 **** 2025-02-19 09:04:54.183567 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.183572 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.183577 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.183581 | orchestrator | 2025-02-19 09:04:54.183586 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-02-19 09:04:54.183591 | orchestrator | Wednesday 19 February 2025 09:03:00 +0000 (0:00:01.977) 0:13:27.741 **** 2025-02-19 09:04:54.183596 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.183601 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.183606 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.183611 | orchestrator | 2025-02-19 09:04:54.183615 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-02-19 09:04:54.183620 | orchestrator | Wednesday 19 February 2025 09:03:03 +0000 (0:00:02.345) 0:13:30.087 **** 2025-02-19 09:04:54.183625 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-02-19 09:04:54.183630 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-02-19 09:04:54.183635 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 
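In this containerized deployment, the systemd tasks above template a per-host unit wrapping the MDS container, add and enable a ceph-mds.target, and start the service; the 'wait for mds socket to exist' task then retries until the daemon's admin socket appears, which is why the first attempt fails while the container is still starting. The manual equivalent is roughly the following, with the unit and socket names assumed from ceph-ansible's usual conventions rather than taken from the log:

    systemctl daemon-reload
    systemctl enable ceph-mds.target
    systemctl enable --now ceph-mds@testbed-node-3
    # Poll for the MDS admin socket the way the retrying task does
    # (socket path assumed).
    until test -S /var/run/ceph/ceph-mds.testbed-node-3.asok; do sleep 5; done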
2025-02-19 09:04:54.183640 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.183644 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.183649 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.183654 | orchestrator | 2025-02-19 09:04:54.183659 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-19 09:04:54.183666 | orchestrator | Wednesday 19 February 2025 09:03:20 +0000 (0:00:17.251) 0:13:47.339 **** 2025-02-19 09:04:54.183671 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.183676 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.183681 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.183685 | orchestrator | 2025-02-19 09:04:54.183690 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-19 09:04:54.183695 | orchestrator | Wednesday 19 February 2025 09:03:21 +0000 (0:00:00.727) 0:13:48.066 **** 2025-02-19 09:04:54.183700 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.183704 | orchestrator | 2025-02-19 09:04:54.183709 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-02-19 09:04:54.183714 | orchestrator | Wednesday 19 February 2025 09:03:22 +0000 (0:00:00.954) 0:13:49.021 **** 2025-02-19 09:04:54.183722 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.183726 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.183731 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.183736 | orchestrator | 2025-02-19 09:04:54.183741 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-02-19 09:04:54.183746 | orchestrator | Wednesday 19 February 2025 09:03:22 +0000 (0:00:00.377) 0:13:49.398 **** 2025-02-19 09:04:54.183750 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.183755 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.183760 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.183765 | orchestrator | 2025-02-19 09:04:54.183770 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-02-19 09:04:54.183774 | orchestrator | Wednesday 19 February 2025 09:03:23 +0000 (0:00:01.261) 0:13:50.659 **** 2025-02-19 09:04:54.183779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.183784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.183789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.183794 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.183801 | orchestrator | 2025-02-19 09:04:54.183806 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-02-19 09:04:54.183811 | orchestrator | Wednesday 19 February 2025 09:03:24 +0000 (0:00:01.031) 0:13:51.691 **** 2025-02-19 09:04:54.183815 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.183823 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.183827 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.183832 | orchestrator | 2025-02-19 09:04:54.183837 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-19 09:04:54.183842 | orchestrator | Wednesday 19 February 2025 09:03:25 +0000 (0:00:00.683) 0:13:52.374 **** 2025-02-19 09:04:54.183847 | 
orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.183851 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.183856 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.183861 | orchestrator | 2025-02-19 09:04:54.183865 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-02-19 09:04:54.183870 | orchestrator | 2025-02-19 09:04:54.183875 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-19 09:04:54.183880 | orchestrator | Wednesday 19 February 2025 09:03:28 +0000 (0:00:02.453) 0:13:54.827 **** 2025-02-19 09:04:54.183885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.183892 | orchestrator | 2025-02-19 09:04:54.183897 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-19 09:04:54.183902 | orchestrator | Wednesday 19 February 2025 09:03:29 +0000 (0:00:00.997) 0:13:55.825 **** 2025-02-19 09:04:54.183906 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.183911 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.183916 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.183921 | orchestrator | 2025-02-19 09:04:54.183925 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-19 09:04:54.183930 | orchestrator | Wednesday 19 February 2025 09:03:29 +0000 (0:00:00.503) 0:13:56.329 **** 2025-02-19 09:04:54.183935 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.183940 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.183945 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.183949 | orchestrator | 2025-02-19 09:04:54.183956 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-19 09:04:54.183961 | orchestrator | Wednesday 19 February 2025 09:03:30 +0000 (0:00:00.786) 0:13:57.115 **** 2025-02-19 09:04:54.183966 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.183971 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.183976 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.183980 | orchestrator | 2025-02-19 09:04:54.183985 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-19 09:04:54.183993 | orchestrator | Wednesday 19 February 2025 09:03:31 +0000 (0:00:00.769) 0:13:57.885 **** 2025-02-19 09:04:54.183998 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.184003 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.184008 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.184012 | orchestrator | 2025-02-19 09:04:54.184017 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-19 09:04:54.184022 | orchestrator | Wednesday 19 February 2025 09:03:32 +0000 (0:00:01.217) 0:13:59.103 **** 2025-02-19 09:04:54.184027 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184031 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184036 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184041 | orchestrator | 2025-02-19 09:04:54.184046 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-19 09:04:54.184051 | orchestrator | Wednesday 19 February 2025 09:03:32 +0000 (0:00:00.375) 0:13:59.478 **** 2025-02-19 
09:04:54.184056 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184060 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184065 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184070 | orchestrator | 2025-02-19 09:04:54.184075 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-19 09:04:54.184080 | orchestrator | Wednesday 19 February 2025 09:03:33 +0000 (0:00:00.349) 0:13:59.827 **** 2025-02-19 09:04:54.184085 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184089 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184094 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184099 | orchestrator | 2025-02-19 09:04:54.184104 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-19 09:04:54.184109 | orchestrator | Wednesday 19 February 2025 09:03:33 +0000 (0:00:00.421) 0:14:00.249 **** 2025-02-19 09:04:54.184113 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184118 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184132 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184137 | orchestrator | 2025-02-19 09:04:54.184145 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-19 09:04:54.184150 | orchestrator | Wednesday 19 February 2025 09:03:34 +0000 (0:00:00.761) 0:14:01.010 **** 2025-02-19 09:04:54.184155 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184160 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184165 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184169 | orchestrator | 2025-02-19 09:04:54.184174 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-19 09:04:54.184179 | orchestrator | Wednesday 19 February 2025 09:03:34 +0000 (0:00:00.369) 0:14:01.379 **** 2025-02-19 09:04:54.184184 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184189 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184194 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184198 | orchestrator | 2025-02-19 09:04:54.184203 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-19 09:04:54.184208 | orchestrator | Wednesday 19 February 2025 09:03:34 +0000 (0:00:00.362) 0:14:01.742 **** 2025-02-19 09:04:54.184213 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.184217 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.184222 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.184227 | orchestrator | 2025-02-19 09:04:54.184232 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-19 09:04:54.184237 | orchestrator | Wednesday 19 February 2025 09:03:35 +0000 (0:00:00.843) 0:14:02.585 **** 2025-02-19 09:04:54.184241 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184246 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184251 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184256 | orchestrator | 2025-02-19 09:04:54.184261 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-19 09:04:54.184265 | orchestrator | Wednesday 19 February 2025 09:03:36 +0000 (0:00:00.752) 0:14:03.337 **** 2025-02-19 09:04:54.184273 | orchestrator | skipping: [testbed-node-3] 2025-02-19 
09:04:54.184278 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184283 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184287 | orchestrator | 2025-02-19 09:04:54.184292 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-19 09:04:54.184297 | orchestrator | Wednesday 19 February 2025 09:03:36 +0000 (0:00:00.359) 0:14:03.697 **** 2025-02-19 09:04:54.184302 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.184307 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.184311 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.184316 | orchestrator | 2025-02-19 09:04:54.184321 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-19 09:04:54.184326 | orchestrator | Wednesday 19 February 2025 09:03:37 +0000 (0:00:00.456) 0:14:04.153 **** 2025-02-19 09:04:54.184330 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.184335 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.184340 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.184345 | orchestrator | 2025-02-19 09:04:54.184349 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-19 09:04:54.184354 | orchestrator | Wednesday 19 February 2025 09:03:37 +0000 (0:00:00.472) 0:14:04.625 **** 2025-02-19 09:04:54.184359 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.184366 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.184371 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:04:54.184376 | orchestrator | 2025-02-19 09:04:54.184381 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-19 09:04:54.184386 | orchestrator | Wednesday 19 February 2025 09:03:38 +0000 (0:00:00.835) 0:14:05.461 **** 2025-02-19 09:04:54.184390 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184395 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184400 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184405 | orchestrator | 2025-02-19 09:04:54.184410 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-19 09:04:54.184417 | orchestrator | Wednesday 19 February 2025 09:03:39 +0000 (0:00:00.415) 0:14:05.877 **** 2025-02-19 09:04:54.184422 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184426 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184431 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184436 | orchestrator | 2025-02-19 09:04:54.184441 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-19 09:04:54.184445 | orchestrator | Wednesday 19 February 2025 09:03:39 +0000 (0:00:00.348) 0:14:06.225 **** 2025-02-19 09:04:54.184450 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184455 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184460 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184465 | orchestrator | 2025-02-19 09:04:54.184469 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-19 09:04:54.184474 | orchestrator | Wednesday 19 February 2025 09:03:39 +0000 (0:00:00.334) 0:14:06.560 **** 2025-02-19 09:04:54.184479 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:04:54.184484 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:04:54.184489 | orchestrator | ok: [testbed-node-5] 
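The block of 'check for a ... container' tasks and the handler_*_status facts above only record whether the corresponding Ceph daemon container is already running on each node, so that the ceph-handler restart scripts later run only where a daemon actually exists. Each check is conceptually a container lookup along the lines of the following sketch, assuming ceph-ansible's usual ceph-<daemon>-<hostname> container naming:

    # Non-empty output means an MDS container is already running on this node;
    # the same pattern repeats for osd, rgw, crash, and the other daemons.
    docker ps -q --filter "name=ceph-mds-testbed-node-3"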
2025-02-19 09:04:54.184493 | orchestrator | 2025-02-19 09:04:54.184498 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-19 09:04:54.184503 | orchestrator | Wednesday 19 February 2025 09:03:40 +0000 (0:00:00.780) 0:14:07.340 **** 2025-02-19 09:04:54.184508 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184513 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184517 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184522 | orchestrator | 2025-02-19 09:04:54.184527 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-19 09:04:54.184543 | orchestrator | Wednesday 19 February 2025 09:03:40 +0000 (0:00:00.415) 0:14:07.756 **** 2025-02-19 09:04:54.184548 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184564 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184569 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184574 | orchestrator | 2025-02-19 09:04:54.184578 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-19 09:04:54.184583 | orchestrator | Wednesday 19 February 2025 09:03:41 +0000 (0:00:00.376) 0:14:08.133 **** 2025-02-19 09:04:54.184588 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184593 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184598 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184602 | orchestrator | 2025-02-19 09:04:54.184607 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-19 09:04:54.184612 | orchestrator | Wednesday 19 February 2025 09:03:41 +0000 (0:00:00.424) 0:14:08.558 **** 2025-02-19 09:04:54.184617 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184622 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184626 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184631 | orchestrator | 2025-02-19 09:04:54.184636 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-19 09:04:54.184641 | orchestrator | Wednesday 19 February 2025 09:03:42 +0000 (0:00:00.875) 0:14:09.433 **** 2025-02-19 09:04:54.184646 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184650 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184655 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184660 | orchestrator | 2025-02-19 09:04:54.184665 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-19 09:04:54.184670 | orchestrator | Wednesday 19 February 2025 09:03:43 +0000 (0:00:00.421) 0:14:09.855 **** 2025-02-19 09:04:54.184674 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184679 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184684 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184688 | orchestrator | 2025-02-19 09:04:54.184693 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-19 09:04:54.184698 | orchestrator | Wednesday 19 February 2025 09:03:43 +0000 (0:00:00.362) 0:14:10.218 **** 2025-02-19 09:04:54.184703 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184708 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184712 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184717 | orchestrator | 2025-02-19 
09:04:54.184722 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-19 09:04:54.184727 | orchestrator | Wednesday 19 February 2025 09:03:43 +0000 (0:00:00.400) 0:14:10.618 **** 2025-02-19 09:04:54.184732 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184736 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184741 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184746 | orchestrator | 2025-02-19 09:04:54.184751 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-19 09:04:54.184755 | orchestrator | Wednesday 19 February 2025 09:03:44 +0000 (0:00:00.704) 0:14:11.323 **** 2025-02-19 09:04:54.184763 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184768 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184772 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184777 | orchestrator | 2025-02-19 09:04:54.184782 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-19 09:04:54.184787 | orchestrator | Wednesday 19 February 2025 09:03:44 +0000 (0:00:00.392) 0:14:11.715 **** 2025-02-19 09:04:54.184792 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184796 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184801 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184806 | orchestrator | 2025-02-19 09:04:54.184810 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-19 09:04:54.184815 | orchestrator | Wednesday 19 February 2025 09:03:45 +0000 (0:00:00.366) 0:14:12.082 **** 2025-02-19 09:04:54.184825 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184830 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184835 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184840 | orchestrator | 2025-02-19 09:04:54.184844 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-19 09:04:54.184849 | orchestrator | Wednesday 19 February 2025 09:03:45 +0000 (0:00:00.376) 0:14:12.459 **** 2025-02-19 09:04:54.184854 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184859 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184866 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184871 | orchestrator | 2025-02-19 09:04:54.184876 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-19 09:04:54.184880 | orchestrator | Wednesday 19 February 2025 09:03:46 +0000 (0:00:00.724) 0:14:13.183 **** 2025-02-19 09:04:54.184885 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.184890 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-19 09:04:54.184895 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184902 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.184907 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-19 09:04:54.184912 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184917 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.184921 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-19 09:04:54.184926 | orchestrator | skipping: [testbed-node-5] 2025-02-19 
09:04:54.184931 | orchestrator | 2025-02-19 09:04:54.184936 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-19 09:04:54.184941 | orchestrator | Wednesday 19 February 2025 09:03:46 +0000 (0:00:00.454) 0:14:13.638 **** 2025-02-19 09:04:54.184945 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-19 09:04:54.184950 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-19 09:04:54.184955 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.184960 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-19 09:04:54.184965 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-19 09:04:54.184970 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.184974 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-19 09:04:54.184979 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-19 09:04:54.184984 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.184989 | orchestrator | 2025-02-19 09:04:54.184994 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-19 09:04:54.184998 | orchestrator | Wednesday 19 February 2025 09:03:47 +0000 (0:00:00.434) 0:14:14.073 **** 2025-02-19 09:04:54.185003 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185008 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185013 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185017 | orchestrator | 2025-02-19 09:04:54.185022 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-19 09:04:54.185027 | orchestrator | Wednesday 19 February 2025 09:03:47 +0000 (0:00:00.375) 0:14:14.448 **** 2025-02-19 09:04:54.185032 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185036 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185041 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185046 | orchestrator | 2025-02-19 09:04:54.185051 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:04:54.185058 | orchestrator | Wednesday 19 February 2025 09:03:48 +0000 (0:00:00.703) 0:14:15.152 **** 2025-02-19 09:04:54.185063 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185068 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185073 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185078 | orchestrator | 2025-02-19 09:04:54.185083 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:04:54.185090 | orchestrator | Wednesday 19 February 2025 09:03:48 +0000 (0:00:00.417) 0:14:15.569 **** 2025-02-19 09:04:54.185095 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185100 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185105 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185109 | orchestrator | 2025-02-19 09:04:54.185114 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:04:54.185119 | orchestrator | Wednesday 19 February 2025 09:03:49 +0000 (0:00:00.361) 0:14:15.931 **** 2025-02-19 09:04:54.185133 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185138 | orchestrator | skipping: 
[testbed-node-4] 2025-02-19 09:04:54.185143 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185147 | orchestrator | 2025-02-19 09:04:54.185152 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:04:54.185157 | orchestrator | Wednesday 19 February 2025 09:03:49 +0000 (0:00:00.387) 0:14:16.318 **** 2025-02-19 09:04:54.185162 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185167 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185171 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185176 | orchestrator | 2025-02-19 09:04:54.185181 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:04:54.185185 | orchestrator | Wednesday 19 February 2025 09:03:50 +0000 (0:00:00.685) 0:14:17.004 **** 2025-02-19 09:04:54.185190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.185195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.185200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.185204 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185209 | orchestrator | 2025-02-19 09:04:54.185214 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:04:54.185219 | orchestrator | Wednesday 19 February 2025 09:03:50 +0000 (0:00:00.528) 0:14:17.533 **** 2025-02-19 09:04:54.185223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.185231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.185235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.185240 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185245 | orchestrator | 2025-02-19 09:04:54.185250 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:04:54.185255 | orchestrator | Wednesday 19 February 2025 09:03:51 +0000 (0:00:00.467) 0:14:18.000 **** 2025-02-19 09:04:54.185260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.185264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.185272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:04:54.185277 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185282 | orchestrator | 2025-02-19 09:04:54.185287 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.185291 | orchestrator | Wednesday 19 February 2025 09:03:51 +0000 (0:00:00.455) 0:14:18.456 **** 2025-02-19 09:04:54.185296 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185301 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185306 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185311 | orchestrator | 2025-02-19 09:04:54.185315 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:04:54.185320 | orchestrator | Wednesday 19 February 2025 09:03:52 +0000 (0:00:00.394) 0:14:18.850 **** 2025-02-19 09:04:54.185325 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.185330 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185335 | orchestrator | skipping: [testbed-node-4] => (item=0)  
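The chain of ceph-facts tasks above (all skipped here because the address facts were already resolved in an earlier play) determines the address each RADOS gateway binds to, trying radosgw_address_block, then radosgw_address, then radosgw_interface in the order the tasks appear. The interface-based fallback is conceptually just picking the primary IPv4 address of the configured interface, e.g. (interface name assumed for illustration):

    # Primary IPv4 address of the assumed API interface.
    ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1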
2025-02-19 09:04:54.185339 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185344 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.185352 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185357 | orchestrator | 2025-02-19 09:04:54.185362 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:04:54.185367 | orchestrator | Wednesday 19 February 2025 09:03:52 +0000 (0:00:00.507) 0:14:19.357 **** 2025-02-19 09:04:54.185372 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185376 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185381 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185386 | orchestrator | 2025-02-19 09:04:54.185391 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:04:54.185395 | orchestrator | Wednesday 19 February 2025 09:03:53 +0000 (0:00:00.728) 0:14:20.086 **** 2025-02-19 09:04:54.185400 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185405 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185410 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185414 | orchestrator | 2025-02-19 09:04:54.185419 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:04:54.185424 | orchestrator | Wednesday 19 February 2025 09:03:53 +0000 (0:00:00.352) 0:14:20.439 **** 2025-02-19 09:04:54.185429 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:04:54.185433 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185438 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:04:54.185443 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185448 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:04:54.185453 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185458 | orchestrator | 2025-02-19 09:04:54.185462 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:04:54.185467 | orchestrator | Wednesday 19 February 2025 09:03:54 +0000 (0:00:00.516) 0:14:20.955 **** 2025-02-19 09:04:54.185472 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.185477 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185482 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.185487 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185491 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-19 09:04:54.185496 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185501 | orchestrator | 2025-02-19 09:04:54.185506 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:04:54.185511 | orchestrator | Wednesday 19 February 2025 09:03:54 +0000 (0:00:00.357) 0:14:21.313 **** 2025-02-19 09:04:54.185515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:04:54.185520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:04:54.185525 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-02-19 09:04:54.185530 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185534 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-19 09:04:54.185539 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 09:04:54.185544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 09:04:54.185549 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185553 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-19 09:04:54.185558 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 09:04:54.185563 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 09:04:54.185568 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185573 | orchestrator | 2025-02-19 09:04:54.185580 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-19 09:04:54.185588 | orchestrator | Wednesday 19 February 2025 09:03:55 +0000 (0:00:01.019) 0:14:22.332 **** 2025-02-19 09:04:54.185593 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185597 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185602 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185607 | orchestrator | 2025-02-19 09:04:54.185612 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-19 09:04:54.185616 | orchestrator | Wednesday 19 February 2025 09:03:56 +0000 (0:00:00.608) 0:14:22.940 **** 2025-02-19 09:04:54.185621 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.185626 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185631 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-19 09:04:54.185636 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185641 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-19 09:04:54.185645 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185650 | orchestrator | 2025-02-19 09:04:54.185657 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-19 09:04:54.185663 | orchestrator | Wednesday 19 February 2025 09:03:57 +0000 (0:00:00.885) 0:14:23.826 **** 2025-02-19 09:04:54.185667 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185672 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185677 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185682 | orchestrator | 2025-02-19 09:04:54.185687 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-19 09:04:54.185692 | orchestrator | Wednesday 19 February 2025 09:03:57 +0000 (0:00:00.617) 0:14:24.444 **** 2025-02-19 09:04:54.185696 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185701 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185706 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185711 | orchestrator | 2025-02-19 09:04:54.185716 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-02-19 09:04:54.185720 | orchestrator | Wednesday 19 February 2025 09:03:58 +0000 (0:00:01.028) 0:14:25.473 **** 2025-02-19 09:04:54.185725 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.185730 | orchestrator | 
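The 'create rgw keyrings' task (skipped in this pass) normally creates one cephx client key per RGW instance on the first monitor; the common.yml tasks included next then fetch that key from the monitor and install it on each gateway node. Done by hand it would look roughly like this, with the entity name derived from the rgw0 instance shown elsewhere in the log and the capabilities and output path being assumptions, not values printed here:

    ceph auth get-or-create client.rgw.testbed-node-3.rgw0 \
        mon 'allow rw' osd 'allow rwx' \
        -o /etc/ceph/ceph.client.rgw.testbed-node-3.rgw0.keyring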
2025-02-19 09:04:54.185735 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-02-19 09:04:54.185739 | orchestrator | Wednesday 19 February 2025 09:03:59 +0000 (0:00:00.618) 0:14:26.091 **** 2025-02-19 09:04:54.185744 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-02-19 09:04:54.185749 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-02-19 09:04:54.185754 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-02-19 09:04:54.185759 | orchestrator | 2025-02-19 09:04:54.185764 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-02-19 09:04:54.185768 | orchestrator | Wednesday 19 February 2025 09:04:00 +0000 (0:00:01.065) 0:14:27.157 **** 2025-02-19 09:04:54.185773 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:04:54.185781 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.185786 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-19 09:04:54.185791 | orchestrator | 2025-02-19 09:04:54.185795 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-02-19 09:04:54.185802 | orchestrator | Wednesday 19 February 2025 09:04:02 +0000 (0:00:01.995) 0:14:29.152 **** 2025-02-19 09:04:54.185807 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-19 09:04:54.185812 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-19 09:04:54.185817 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.185821 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-19 09:04:54.185826 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-19 09:04:54.185831 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.185836 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-19 09:04:54.185845 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-19 09:04:54.185849 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.185854 | orchestrator | 2025-02-19 09:04:54.185859 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-02-19 09:04:54.185864 | orchestrator | Wednesday 19 February 2025 09:04:03 +0000 (0:00:01.247) 0:14:30.399 **** 2025-02-19 09:04:54.185869 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185873 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185878 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185883 | orchestrator | 2025-02-19 09:04:54.185888 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-02-19 09:04:54.185892 | orchestrator | Wednesday 19 February 2025 09:04:03 +0000 (0:00:00.352) 0:14:30.752 **** 2025-02-19 09:04:54.185897 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185902 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.185907 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.185911 | orchestrator | 2025-02-19 09:04:54.185916 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-02-19 09:04:54.185921 | orchestrator | Wednesday 19 February 2025 09:04:04 +0000 (0:00:00.623) 0:14:31.375 **** 2025-02-19 09:04:54.185926 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-02-19 09:04:54.185931 | 
orchestrator | 2025-02-19 09:04:54.185935 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-02-19 09:04:54.185940 | orchestrator | Wednesday 19 February 2025 09:04:04 +0000 (0:00:00.273) 0:14:31.648 **** 2025-02-19 09:04:54.185945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.185952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.185957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.185962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.185967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.185971 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.185976 | orchestrator | 2025-02-19 09:04:54.185981 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-02-19 09:04:54.185986 | orchestrator | Wednesday 19 February 2025 09:04:05 +0000 (0:00:00.715) 0:14:32.364 **** 2025-02-19 09:04:54.185990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.185997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.186002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.186007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.186027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.186033 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.186038 | orchestrator | 2025-02-19 09:04:54.186042 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-02-19 09:04:54.186047 | orchestrator | Wednesday 19 February 2025 09:04:06 +0000 (0:00:01.012) 0:14:33.377 **** 2025-02-19 09:04:54.186052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.186060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.186065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.186070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 09:04:54.186075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-19 
09:04:54.186079 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.186084 | orchestrator | 2025-02-19 09:04:54.186089 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-02-19 09:04:54.186094 | orchestrator | Wednesday 19 February 2025 09:04:07 +0000 (0:00:01.049) 0:14:34.426 **** 2025-02-19 09:04:54.186099 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-19 09:04:54.186104 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-19 09:04:54.186109 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-19 09:04:54.186113 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-19 09:04:54.186118 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-19 09:04:54.186135 | orchestrator | 2025-02-19 09:04:54.186140 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-02-19 09:04:54.186145 | orchestrator | Wednesday 19 February 2025 09:04:34 +0000 (0:00:26.848) 0:15:01.275 **** 2025-02-19 09:04:54.186150 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.186154 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.186159 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.186164 | orchestrator | 2025-02-19 09:04:54.186169 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-02-19 09:04:54.186174 | orchestrator | Wednesday 19 February 2025 09:04:34 +0000 (0:00:00.491) 0:15:01.767 **** 2025-02-19 09:04:54.186178 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.186183 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.186188 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.186193 | orchestrator | 2025-02-19 09:04:54.186198 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-02-19 09:04:54.186202 | orchestrator | Wednesday 19 February 2025 09:04:35 +0000 (0:00:00.382) 0:15:02.150 **** 2025-02-19 09:04:54.186207 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.186212 | orchestrator | 2025-02-19 09:04:54.186217 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-02-19 09:04:54.186221 | orchestrator | Wednesday 19 February 2025 09:04:35 +0000 (0:00:00.588) 0:15:02.738 **** 2025-02-19 09:04:54.186226 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.186231 | orchestrator | 2025-02-19 09:04:54.186236 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-02-19 09:04:54.186244 | orchestrator | Wednesday 19 February 2025 09:04:36 +0000 (0:00:00.910) 0:15:03.648 **** 2025-02-19 09:04:54.186251 | 
orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.186263 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.186271 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.186278 | orchestrator | 2025-02-19 09:04:54.186285 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-02-19 09:04:54.186293 | orchestrator | Wednesday 19 February 2025 09:04:38 +0000 (0:00:01.304) 0:15:04.952 **** 2025-02-19 09:04:54.186300 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.186307 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.186318 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.186326 | orchestrator | 2025-02-19 09:04:54.186332 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-02-19 09:04:54.186336 | orchestrator | Wednesday 19 February 2025 09:04:39 +0000 (0:00:01.258) 0:15:06.211 **** 2025-02-19 09:04:54.186341 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.186346 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.186351 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.186356 | orchestrator | 2025-02-19 09:04:54.186361 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-02-19 09:04:54.186366 | orchestrator | Wednesday 19 February 2025 09:04:41 +0000 (0:00:02.076) 0:15:08.288 **** 2025-02-19 09:04:54.186370 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.186375 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.186380 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-19 09:04:54.186385 | orchestrator | 2025-02-19 09:04:54.186393 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-02-19 09:04:54.186398 | orchestrator | Wednesday 19 February 2025 09:04:43 +0000 (0:00:02.034) 0:15:10.323 **** 2025-02-19 09:04:54.186402 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:04:54.186407 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:04:54.186412 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:04:54.186417 | orchestrator | 2025-02-19 09:04:54.186422 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-19 09:04:54.186426 | orchestrator | Wednesday 19 February 2025 09:04:44 +0000 (0:00:01.451) 0:15:11.774 **** 2025-02-19 09:04:54.186431 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:04:54.186436 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:04:54.186441 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:04:54.186446 | orchestrator | 2025-02-19 09:04:54.186450 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-19 09:04:54.186455 | orchestrator | Wednesday 19 February 2025 09:04:45 +0000 (0:00:00.727) 0:15:12.501 **** 2025-02-19 09:04:54.186460 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:04:54.186465 | orchestrator | 2025-02-19 09:04:54.186470 | orchestrator | RUNNING HANDLER [ceph-handler : set 
_rgw_handler_called before restart] ********
2025-02-19 09:04:54.186474 | orchestrator | Wednesday 19 February 2025 09:04:46 +0000 (0:00:00.913) 0:15:13.414 ****
2025-02-19 09:04:54.186479 | orchestrator | ok: [testbed-node-3]
2025-02-19 09:04:54.186484 | orchestrator | ok: [testbed-node-4]
2025-02-19 09:04:54.186489 | orchestrator | ok: [testbed-node-5]
2025-02-19 09:04:54.186494 | orchestrator |
2025-02-19 09:04:54.186499 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-02-19 09:04:54.186503 | orchestrator | Wednesday 19 February 2025 09:04:46 +0000 (0:00:00.355) 0:15:13.770 ****
2025-02-19 09:04:54.186508 | orchestrator | changed: [testbed-node-3]
2025-02-19 09:04:54.186513 | orchestrator | changed: [testbed-node-4]
2025-02-19 09:04:54.186518 | orchestrator | changed: [testbed-node-5]
2025-02-19 09:04:54.186522 | orchestrator |
2025-02-19 09:04:54.186527 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-02-19 09:04:54.186536 | orchestrator | Wednesday 19 February 2025 09:04:48 +0000 (0:00:01.281) 0:15:15.051 ****
2025-02-19 09:04:54.186541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-02-19 09:04:54.186546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-02-19 09:04:54.186551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-02-19 09:04:54.186556 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:04:54.186560 | orchestrator |
2025-02-19 09:04:54.186565 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-02-19 09:04:54.186570 | orchestrator | Wednesday 19 February 2025 09:04:49 +0000 (0:00:01.390) 0:15:16.441 ****
2025-02-19 09:04:54.186575 | orchestrator | ok: [testbed-node-3]
2025-02-19 09:04:54.186580 | orchestrator | ok: [testbed-node-4]
2025-02-19 09:04:54.186585 | orchestrator | ok: [testbed-node-5]
2025-02-19 09:04:54.186592 | orchestrator |
2025-02-19 09:04:54.186597 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-02-19 09:04:54.186602 | orchestrator | Wednesday 19 February 2025 09:04:50 +0000 (0:00:00.468) 0:15:16.910 ****
2025-02-19 09:04:54.186607 | orchestrator | changed: [testbed-node-3]
2025-02-19 09:04:54.186612 | orchestrator | changed: [testbed-node-4]
2025-02-19 09:04:54.186617 | orchestrator | changed: [testbed-node-5]
2025-02-19 09:04:54.186621 | orchestrator |
2025-02-19 09:04:54.186626 | orchestrator | PLAY RECAP *********************************************************************
2025-02-19 09:04:54.186631 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-02-19 09:04:54.186637 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-02-19 09:04:54.186642 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-02-19 09:04:54.186647 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-02-19 09:04:54.186652 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-02-19 09:04:54.186659 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-02-19 09:04:57.003007 | orchestrator |
2025-02-19 09:04:57.003192 | orchestrator |
2025-02-19 09:04:57.003225 | orchestrator |
2025-02-19 09:04:57.003249 | orchestrator | TASKS RECAP ********************************************************************
2025-02-19 09:04:57.003274 | orchestrator | Wednesday 19 February 2025 09:04:51 +0000 (0:00:01.722) 0:15:18.633 ****
2025-02-19 09:04:57.003296 | orchestrator | ===============================================================================
2025-02-19 09:04:57.003316 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 39.53s
2025-02-19 09:04:57.003338 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:quincy image -- 38.28s
2025-02-19 09:04:57.003361 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 26.85s
2025-02-19 09:04:57.003384 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.63s
2025-02-19 09:04:57.003404 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.25s
2025-02-19 09:04:57.003426 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.78s
2025-02-19 09:04:57.003448 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.94s
2025-02-19 09:04:57.003470 | orchestrator | ceph-mon : fetch ceph initial keys ------------------------------------- 10.38s
2025-02-19 09:04:57.003490 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 9.24s
2025-02-19 09:04:57.003512 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 8.69s
2025-02-19 09:04:57.003568 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 7.61s
2025-02-19 09:04:57.003591 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.55s
2025-02-19 09:04:57.003632 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.20s
2025-02-19 09:04:57.003658 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 5.64s
2025-02-19 09:04:57.003679 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 5.63s
2025-02-19 09:04:57.003699 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 5.59s
2025-02-19 09:04:57.003720 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.43s
2025-02-19 09:04:57.003742 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 4.45s
2025-02-19 09:04:57.003785 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 4.27s
2025-02-19 09:04:57.003806 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 4.03s
2025-02-19 09:04:57.003840 | orchestrator | 2025-02-19 09:04:53 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state STARTED
2025-02-19 09:04:57.003862 | orchestrator | 2025-02-19 09:04:53 | INFO  | Wait 1 second(s) until the next check
2025-02-19 09:04:57.003908 | orchestrator | 2025-02-19 09:04:56 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED
2025-02-19 09:04:57.004484 | orchestrator | 2025-02-19 09:04:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED
2025-02-19 09:04:57.006665 | orchestrator | 2025-02-19 09:04:57 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state
STARTED 2025-02-19 09:05:00.069115 | orchestrator | 2025-02-19 09:04:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:00.069333 | orchestrator | 2025-02-19 09:05:00 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:00.069517 | orchestrator | 2025-02-19 09:05:00 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:00.075388 | orchestrator | 2025-02-19 09:05:00 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:00.076478 | orchestrator | 2025-02-19 09:05:00 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:00.078242 | orchestrator | 2025-02-19 09:05:00 | INFO  | Task 1ba086de-64b4-495c-850c-b08dbc777d1d is in state SUCCESS 2025-02-19 09:05:00.080369 | orchestrator | 2025-02-19 09:05:00.080446 | orchestrator | 2025-02-19 09:05:00.080475 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-02-19 09:05:00.080499 | orchestrator | 2025-02-19 09:05:00.080523 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-02-19 09:05:00.080545 | orchestrator | Wednesday 19 February 2025 09:00:44 +0000 (0:00:00.108) 0:00:00.108 **** 2025-02-19 09:05:00.080568 | orchestrator | ok: [localhost] => { 2025-02-19 09:05:00.080594 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-02-19 09:05:00.080617 | orchestrator | } 2025-02-19 09:05:00.080639 | orchestrator | 2025-02-19 09:05:00.080662 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-02-19 09:05:00.080685 | orchestrator | Wednesday 19 February 2025 09:00:44 +0000 (0:00:00.043) 0:00:00.152 **** 2025-02-19 09:05:00.080709 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-02-19 09:05:00.080735 | orchestrator | ...ignoring 2025-02-19 09:05:00.080760 | orchestrator | 2025-02-19 09:05:00.080784 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-02-19 09:05:00.080808 | orchestrator | Wednesday 19 February 2025 09:00:46 +0000 (0:00:01.636) 0:00:01.789 **** 2025-02-19 09:05:00.080862 | orchestrator | skipping: [localhost] 2025-02-19 09:05:00.080886 | orchestrator | 2025-02-19 09:05:00.080912 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-02-19 09:05:00.080937 | orchestrator | Wednesday 19 February 2025 09:00:46 +0000 (0:00:00.075) 0:00:01.864 **** 2025-02-19 09:05:00.080960 | orchestrator | ok: [localhost] 2025-02-19 09:05:00.080984 | orchestrator | 2025-02-19 09:05:00.081010 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:05:00.081034 | orchestrator | 2025-02-19 09:05:00.081054 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:05:00.081068 | orchestrator | Wednesday 19 February 2025 09:00:46 +0000 (0:00:00.179) 0:00:02.043 **** 2025-02-19 09:05:00.081083 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.081097 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:05:00.081111 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:05:00.081159 | orchestrator | 2025-02-19 09:05:00.081178 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:05:00.081193 | orchestrator | Wednesday 19 February 2025 09:00:47 +0000 (0:00:00.754) 0:00:02.798 **** 2025-02-19 09:05:00.081207 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-02-19 09:05:00.081221 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-02-19 09:05:00.081236 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-02-19 09:05:00.081250 | orchestrator | 2025-02-19 09:05:00.081264 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-02-19 09:05:00.081278 | orchestrator | 2025-02-19 09:05:00.081292 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-02-19 09:05:00.081306 | orchestrator | Wednesday 19 February 2025 09:00:47 +0000 (0:00:00.775) 0:00:03.574 **** 2025-02-19 09:05:00.081320 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-19 09:05:00.081334 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-19 09:05:00.081348 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-19 09:05:00.081362 | orchestrator | 2025-02-19 09:05:00.081377 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-19 09:05:00.081391 | orchestrator | Wednesday 19 February 2025 09:00:48 +0000 (0:00:00.960) 0:00:04.534 **** 2025-02-19 09:05:00.081405 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:05:00.081421 | orchestrator | 2025-02-19 09:05:00.081435 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-02-19 09:05:00.081449 | orchestrator | Wednesday 19 February 2025 09:00:49 +0000 (0:00:00.720) 0:00:05.255 **** 2025-02-19 
09:05:00.081486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.081517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.081535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.081551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.081576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.081599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.081614 | orchestrator | 2025-02-19 09:05:00.081629 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-02-19 09:05:00.081643 | orchestrator | Wednesday 19 February 2025 09:00:55 +0000 (0:00:06.280) 0:00:11.535 **** 2025-02-19 09:05:00.081658 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.081674 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.081688 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.081702 | orchestrator | 2025-02-19 09:05:00.081731 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-02-19 09:05:00.081747 | orchestrator | Wednesday 19 February 2025 09:00:57 +0000 (0:00:01.211) 0:00:12.747 **** 2025-02-19 09:05:00.081771 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.081794 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.081818 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.081840 | orchestrator | 2025-02-19 09:05:00.081864 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-02-19 09:05:00.081889 | orchestrator | Wednesday 19 February 2025 09:00:59 +0000 (0:00:02.005) 0:00:14.752 **** 2025-02-19 09:05:00.081925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.081964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.081989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.082111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.082172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.082197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.082213 | orchestrator | 2025-02-19 09:05:00.082228 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-02-19 09:05:00.082243 | orchestrator | Wednesday 19 February 2025 09:01:06 +0000 (0:00:07.868) 0:00:22.621 **** 2025-02-19 09:05:00.082257 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.082272 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.082286 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.082300 | orchestrator | 2025-02-19 09:05:00.082314 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-02-19 09:05:00.082328 | orchestrator | Wednesday 19 February 2025 09:01:08 +0000 (0:00:01.509) 0:00:24.130 **** 2025-02-19 09:05:00.082342 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.082356 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:05:00.082370 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:05:00.082384 | orchestrator | 2025-02-19 09:05:00.082398 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-02-19 09:05:00.082412 | orchestrator | Wednesday 19 February 2025 09:01:20 +0000 (0:00:11.861) 0:00:35.992 **** 2025-02-19 09:05:00.082427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.082470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.082488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-19 09:05:00.082511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.082535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.082550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-19 09:05:00.082565 | orchestrator | 2025-02-19 09:05:00.082579 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-02-19 09:05:00.082594 | orchestrator | Wednesday 19 February 2025 09:01:26 +0000 (0:00:06.484) 0:00:42.477 **** 2025-02-19 09:05:00.082608 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.082622 | orchestrator 
| changed: [testbed-node-1] 2025-02-19 09:05:00.082636 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:05:00.082650 | orchestrator | 2025-02-19 09:05:00.082664 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-02-19 09:05:00.082678 | orchestrator | Wednesday 19 February 2025 09:01:28 +0000 (0:00:01.431) 0:00:43.908 **** 2025-02-19 09:05:00.082692 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.082707 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:05:00.082721 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:05:00.082736 | orchestrator | 2025-02-19 09:05:00.082750 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-02-19 09:05:00.082764 | orchestrator | Wednesday 19 February 2025 09:01:28 +0000 (0:00:00.709) 0:00:44.617 **** 2025-02-19 09:05:00.082778 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.082793 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:05:00.082807 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:05:00.082829 | orchestrator | 2025-02-19 09:05:00.082854 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-02-19 09:05:00.082877 | orchestrator | Wednesday 19 February 2025 09:01:29 +0000 (0:00:00.329) 0:00:44.947 **** 2025-02-19 09:05:00.082900 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-02-19 09:05:00.082937 | orchestrator | ...ignoring 2025-02-19 09:05:00.082961 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-02-19 09:05:00.082982 | orchestrator | ...ignoring 2025-02-19 09:05:00.082997 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-02-19 09:05:00.083012 | orchestrator | ...ignoring 2025-02-19 09:05:00.083026 | orchestrator | 2025-02-19 09:05:00.083040 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-02-19 09:05:00.083054 | orchestrator | Wednesday 19 February 2025 09:01:40 +0000 (0:00:10.945) 0:00:55.892 **** 2025-02-19 09:05:00.083068 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.083082 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:05:00.083096 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:05:00.083110 | orchestrator | 2025-02-19 09:05:00.083124 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-02-19 09:05:00.083205 | orchestrator | Wednesday 19 February 2025 09:01:40 +0000 (0:00:00.753) 0:00:56.645 **** 2025-02-19 09:05:00.083220 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:05:00.083234 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.083248 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.083262 | orchestrator | 2025-02-19 09:05:00.083276 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-02-19 09:05:00.083290 | orchestrator | Wednesday 19 February 2025 09:01:41 +0000 (0:00:00.835) 0:00:57.481 **** 2025-02-19 09:05:00.083303 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:05:00.083315 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.083328 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.083341 | orchestrator | 2025-02-19 09:05:00.083353 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-02-19 09:05:00.083366 | orchestrator | Wednesday 19 February 2025 09:01:42 +0000 (0:00:00.898) 0:00:58.379 **** 2025-02-19 09:05:00.083378 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:05:00.083390 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.083412 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.083432 | orchestrator | 2025-02-19 09:05:00.083453 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-02-19 09:05:00.083472 | orchestrator | Wednesday 19 February 2025 09:01:43 +0000 (0:00:00.955) 0:00:59.335 **** 2025-02-19 09:05:00.083492 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.083514 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:05:00.083532 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:05:00.083553 | orchestrator | 2025-02-19 09:05:00.083575 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-02-19 09:05:00.083597 | orchestrator | Wednesday 19 February 2025 09:01:44 +0000 (0:00:01.067) 0:01:00.403 **** 2025-02-19 09:05:00.083619 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:05:00.083632 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.083645 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.083658 | orchestrator | 2025-02-19 09:05:00.083671 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-19 09:05:00.083691 | orchestrator | Wednesday 19 February 2025 09:01:45 +0000 (0:00:00.507) 0:01:00.910 **** 2025-02-19 09:05:00.083704 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.083716 | orchestrator | skipping: 
[testbed-node-2] 2025-02-19 09:05:00.083736 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-02-19 09:05:00.083749 | orchestrator | 2025-02-19 09:05:00.083762 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-02-19 09:05:00.083774 | orchestrator | Wednesday 19 February 2025 09:01:45 +0000 (0:00:00.645) 0:01:01.556 **** 2025-02-19 09:05:00.083786 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.083799 | orchestrator | 2025-02-19 09:05:00.083811 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-02-19 09:05:00.083831 | orchestrator | Wednesday 19 February 2025 09:01:58 +0000 (0:00:12.863) 0:01:14.419 **** 2025-02-19 09:05:00.083844 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.083861 | orchestrator | 2025-02-19 09:05:00.083874 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-19 09:05:00.083886 | orchestrator | Wednesday 19 February 2025 09:01:58 +0000 (0:00:00.117) 0:01:14.536 **** 2025-02-19 09:05:00.083899 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:05:00.083911 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.083924 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.083937 | orchestrator | 2025-02-19 09:05:00.083949 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-02-19 09:05:00.083961 | orchestrator | Wednesday 19 February 2025 09:02:00 +0000 (0:00:01.210) 0:01:15.747 **** 2025-02-19 09:05:00.083974 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.083986 | orchestrator | 2025-02-19 09:05:00.083999 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-02-19 09:05:00.084011 | orchestrator | Wednesday 19 February 2025 09:02:14 +0000 (0:00:14.154) 0:01:29.901 **** 2025-02-19 09:05:00.084023 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
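[Editor's note] The port-liveness probes above (the earlier 'Check MariaDB service' task on localhost and the per-node 'Check MariaDB service port liveness' tasks, plus the retrying handler 'Wait for first MariaDB service port liveness') all report "Timeout when waiting for search string MariaDB in <ip>:3306", which matches the behaviour of Ansible's wait_for module when the server greeting does not appear before the timeout. A minimal sketch of such a probe follows; it is not the actual kolla-ansible task, and the host address, timeout value and the registered variable name are assumptions lifted from or added around the log above.

  # Sketch only: probe TCP 3306 and look for the "MariaDB" greeting string.
  # A closed port or a missing greeting produces the same timeout message seen above.
  - name: Check MariaDB service port liveness (sketch)
    ansible.builtin.wait_for:
      host: 192.168.16.10      # assumed node/VIP address, copied from the log output
      port: 3306
      search_regex: MariaDB    # the MariaDB handshake contains this string once the server answers
      timeout: 10
    register: check_mariadb_port   # hypothetical variable name
    ignore_errors: true            # mirrors the "...ignoring" lines: the play continues and the
                                   # result decides whether to bootstrap a new cluster or join one

The retrying handler seen above ("FAILED - RETRYING ... (10 retries left)") appears to repeat this kind of probe until the freshly bootstrapped node starts answering on 3306.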
2025-02-19 09:05:00.084036 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.084048 | orchestrator | 2025-02-19 09:05:00.084061 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-02-19 09:05:00.084073 | orchestrator | Wednesday 19 February 2025 09:02:21 +0000 (0:00:07.362) 0:01:37.263 **** 2025-02-19 09:05:00.084086 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.084098 | orchestrator | 2025-02-19 09:05:00.084111 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-02-19 09:05:00.084124 | orchestrator | Wednesday 19 February 2025 09:02:24 +0000 (0:00:02.928) 0:01:40.191 **** 2025-02-19 09:05:00.084154 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.084166 | orchestrator | 2025-02-19 09:05:00.084179 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-02-19 09:05:00.084191 | orchestrator | Wednesday 19 February 2025 09:02:24 +0000 (0:00:00.099) 0:01:40.291 **** 2025-02-19 09:05:00.084204 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:05:00.084216 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.084229 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.084241 | orchestrator | 2025-02-19 09:05:00.084254 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-02-19 09:05:00.084267 | orchestrator | Wednesday 19 February 2025 09:02:24 +0000 (0:00:00.364) 0:01:40.656 **** 2025-02-19 09:05:00.084279 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:05:00.084292 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:05:00.084304 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:05:00.084317 | orchestrator | 2025-02-19 09:05:00.084329 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-02-19 09:05:00.084341 | orchestrator | Wednesday 19 February 2025 09:02:25 +0000 (0:00:00.305) 0:01:40.962 **** 2025-02-19 09:05:00.084354 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-02-19 09:05:00.084366 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.084378 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:05:00.084391 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:05:00.084403 | orchestrator | 2025-02-19 09:05:00.084416 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-02-19 09:05:00.084428 | orchestrator | skipping: no hosts matched 2025-02-19 09:05:00.084441 | orchestrator | 2025-02-19 09:05:00.084453 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-19 09:05:00.084465 | orchestrator | 2025-02-19 09:05:00.084478 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-19 09:05:00.084490 | orchestrator | Wednesday 19 February 2025 09:02:50 +0000 (0:00:24.920) 0:02:05.882 **** 2025-02-19 09:05:00.084509 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:05:00.084522 | orchestrator | 2025-02-19 09:05:00.084534 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-19 09:05:00.084547 | orchestrator | Wednesday 19 February 2025 09:03:08 +0000 (0:00:18.767) 0:02:24.650 **** 2025-02-19 09:05:00.084559 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:05:00.084572 | orchestrator | 
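[Editor's note] The 'Wait for ... MariaDB service to sync WSREP' steps above and below wait until the Galera node reports itself as synced with the cluster; Galera exposes this through the wsrep_local_state_comment status variable, which reads 'Synced' once the node has caught up. A minimal, hedged sketch of such a wait follows; the credentials variable, retry budget and delay are assumptions for illustration, not the actual kolla-ansible implementation.

  # Sketch only: poll the node until Galera reports wsrep_local_state_comment = Synced.
  - name: Wait for MariaDB service to sync WSREP (sketch)
    ansible.builtin.command: >-
      mysql -h 192.168.16.11 -P 3306 -u root -p{{ database_password }}
      --silent --skip-column-names
      -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
    register: wsrep_status         # hypothetical variable name
    changed_when: false            # a status query never changes state
    retries: 10                    # assumed retry budget
    delay: 6                       # assumed delay between checks, in seconds
    until: "'Synced' in wsrep_status.stdout"

With a three-node Galera cluster like the one deployed here, the same check is repeated per node after each restart, which is why the WSREP wait appears once per 'Start/Restart mariadb services' play in the log.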
2025-02-19 09:05:00.084584 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-19 09:05:00.084596 | orchestrator | Wednesday 19 February 2025 09:03:29 +0000 (0:00:20.940) 0:02:45.591 **** 2025-02-19 09:05:00.084609 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:05:00.084622 | orchestrator | 2025-02-19 09:05:00.084634 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-19 09:05:00.084646 | orchestrator | 2025-02-19 09:05:00.084659 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-19 09:05:00.084676 | orchestrator | Wednesday 19 February 2025 09:03:32 +0000 (0:00:03.065) 0:02:48.657 **** 2025-02-19 09:05:00.084689 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:05:00.084701 | orchestrator | 2025-02-19 09:05:00.084713 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-19 09:05:00.084726 | orchestrator | Wednesday 19 February 2025 09:03:52 +0000 (0:00:19.302) 0:03:07.959 **** 2025-02-19 09:05:00.084738 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:05:00.084751 | orchestrator | 2025-02-19 09:05:00.084764 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-19 09:05:00.084781 | orchestrator | Wednesday 19 February 2025 09:04:13 +0000 (0:00:20.909) 0:03:28.868 **** 2025-02-19 09:05:00.084794 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:05:00.084807 | orchestrator | 2025-02-19 09:05:00.084820 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-02-19 09:05:00.084833 | orchestrator | 2025-02-19 09:05:00.084857 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-19 09:05:00.084880 | orchestrator | Wednesday 19 February 2025 09:04:16 +0000 (0:00:02.945) 0:03:31.814 **** 2025-02-19 09:05:00.084902 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.084922 | orchestrator | 2025-02-19 09:05:00.084944 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-19 09:05:00.084966 | orchestrator | Wednesday 19 February 2025 09:04:33 +0000 (0:00:17.645) 0:03:49.460 **** 2025-02-19 09:05:00.084990 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.085016 | orchestrator | 2025-02-19 09:05:00.085035 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-19 09:05:00.085048 | orchestrator | Wednesday 19 February 2025 09:04:38 +0000 (0:00:04.956) 0:03:54.417 **** 2025-02-19 09:05:00.085061 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.085073 | orchestrator | 2025-02-19 09:05:00.085086 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-02-19 09:05:00.085098 | orchestrator | 2025-02-19 09:05:00.085111 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-02-19 09:05:00.085123 | orchestrator | Wednesday 19 February 2025 09:04:41 +0000 (0:00:02.960) 0:03:57.377 **** 2025-02-19 09:05:00.085155 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:05:00.085168 | orchestrator | 2025-02-19 09:05:00.085180 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-02-19 09:05:00.085199 | orchestrator | Wednesday 19 
February 2025 09:04:42 +0000 (0:00:00.838) 0:03:58.216 **** 2025-02-19 09:05:00.085212 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.085225 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.085238 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.085250 | orchestrator | 2025-02-19 09:05:00.085263 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-02-19 09:05:00.085276 | orchestrator | Wednesday 19 February 2025 09:04:45 +0000 (0:00:02.607) 0:04:00.824 **** 2025-02-19 09:05:00.085297 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.085309 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.085322 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.085334 | orchestrator | 2025-02-19 09:05:00.085347 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-02-19 09:05:00.085359 | orchestrator | Wednesday 19 February 2025 09:04:47 +0000 (0:00:02.604) 0:04:03.428 **** 2025-02-19 09:05:00.085372 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.085384 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.085397 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.085409 | orchestrator | 2025-02-19 09:05:00.085422 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-02-19 09:05:00.085435 | orchestrator | Wednesday 19 February 2025 09:04:50 +0000 (0:00:02.841) 0:04:06.270 **** 2025-02-19 09:05:00.085447 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.085460 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.085472 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:05:00.085485 | orchestrator | 2025-02-19 09:05:00.085498 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-02-19 09:05:00.085510 | orchestrator | Wednesday 19 February 2025 09:04:53 +0000 (0:00:02.611) 0:04:08.881 **** 2025-02-19 09:05:00.085523 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:05:00.085535 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:05:00.085548 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:05:00.085560 | orchestrator | 2025-02-19 09:05:00.085573 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-02-19 09:05:00.085586 | orchestrator | Wednesday 19 February 2025 09:04:57 +0000 (0:00:03.785) 0:04:12.666 **** 2025-02-19 09:05:00.085598 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:05:00.085611 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:05:00.085623 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:05:00.085635 | orchestrator | 2025-02-19 09:05:00.085648 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:05:00.085661 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-02-19 09:05:00.085675 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-02-19 09:05:00.085688 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-02-19 09:05:00.085701 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-02-19 09:05:00.085714 | orchestrator | 2025-02-19 09:05:00.085726 | orchestrator | 2025-02-19 
09:05:00.085739 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:05:00.085751 | orchestrator | Wednesday 19 February 2025 09:04:57 +0000 (0:00:00.245) 0:04:12.912 **** 2025-02-19 09:05:00.085764 | orchestrator | =============================================================================== 2025-02-19 09:05:00.085781 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.85s 2025-02-19 09:05:00.085794 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.07s 2025-02-19 09:05:00.085807 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 24.92s 2025-02-19 09:05:00.085819 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.65s 2025-02-19 09:05:00.085838 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 14.15s 2025-02-19 09:05:03.135496 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 12.86s 2025-02-19 09:05:03.135644 | orchestrator | mariadb : Copying over galera.cnf -------------------------------------- 11.86s 2025-02-19 09:05:03.135700 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.95s 2025-02-19 09:05:03.135719 | orchestrator | mariadb : Copying over config.json files for services ------------------- 7.87s 2025-02-19 09:05:03.135738 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.36s 2025-02-19 09:05:03.135755 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 6.49s 2025-02-19 09:05:03.135772 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 6.28s 2025-02-19 09:05:03.135789 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 6.01s 2025-02-19 09:05:03.135806 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.96s 2025-02-19 09:05:03.135821 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.79s 2025-02-19 09:05:03.135839 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.96s 2025-02-19 09:05:03.135854 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.93s 2025-02-19 09:05:03.135872 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.84s 2025-02-19 09:05:03.135889 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.61s 2025-02-19 09:05:03.135907 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.61s 2025-02-19 09:05:03.135925 | orchestrator | 2025-02-19 09:05:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:03.135964 | orchestrator | 2025-02-19 09:05:03 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:03.137376 | orchestrator | 2025-02-19 09:05:03 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:03.137432 | orchestrator | 2025-02-19 09:05:03 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:03.141493 | orchestrator | 2025-02-19 09:05:03 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:06.176308 | orchestrator | 2025-02-19 09:05:03 
| INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:06.176441 | orchestrator | 2025-02-19 09:05:06 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:06.176781 | orchestrator | 2025-02-19 09:05:06 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:06.178502 | orchestrator | 2025-02-19 09:05:06 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:06.186538 | orchestrator | 2025-02-19 09:05:06 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:09.230055 | orchestrator | 2025-02-19 09:05:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:09.230161 | orchestrator | 2025-02-19 09:05:09 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:09.231715 | orchestrator | 2025-02-19 09:05:09 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:09.233510 | orchestrator | 2025-02-19 09:05:09 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:09.234897 | orchestrator | 2025-02-19 09:05:09 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:12.293750 | orchestrator | 2025-02-19 09:05:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:12.293888 | orchestrator | 2025-02-19 09:05:12 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:12.294580 | orchestrator | 2025-02-19 09:05:12 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:12.295688 | orchestrator | 2025-02-19 09:05:12 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:12.296789 | orchestrator | 2025-02-19 09:05:12 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:15.337997 | orchestrator | 2025-02-19 09:05:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:15.338157 | orchestrator | 2025-02-19 09:05:15 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:15.338325 | orchestrator | 2025-02-19 09:05:15 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:15.339012 | orchestrator | 2025-02-19 09:05:15 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:15.339686 | orchestrator | 2025-02-19 09:05:15 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:15.339785 | orchestrator | 2025-02-19 09:05:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:18.380039 | orchestrator | 2025-02-19 09:05:18 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:18.384968 | orchestrator | 2025-02-19 09:05:18 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:18.386497 | orchestrator | 2025-02-19 09:05:18 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:18.391238 | orchestrator | 2025-02-19 09:05:18 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:21.437227 | orchestrator | 2025-02-19 09:05:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:21.437374 | orchestrator | 2025-02-19 09:05:21 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:21.438781 | orchestrator | 2025-02-19 09:05:21 | INFO  | Task 
9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:21.439627 | orchestrator | 2025-02-19 09:05:21 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:21.439668 | orchestrator | 2025-02-19 09:05:21 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:24.516326 | orchestrator | 2025-02-19 09:05:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:24.516495 | orchestrator | 2025-02-19 09:05:24 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:24.518120 | orchestrator | 2025-02-19 09:05:24 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:24.518206 | orchestrator | 2025-02-19 09:05:24 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:24.521351 | orchestrator | 2025-02-19 09:05:24 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:27.582410 | orchestrator | 2025-02-19 09:05:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:27.582550 | orchestrator | 2025-02-19 09:05:27 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:27.583096 | orchestrator | 2025-02-19 09:05:27 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:27.583131 | orchestrator | 2025-02-19 09:05:27 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:27.583936 | orchestrator | 2025-02-19 09:05:27 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:30.642068 | orchestrator | 2025-02-19 09:05:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:30.642183 | orchestrator | 2025-02-19 09:05:30 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:30.643062 | orchestrator | 2025-02-19 09:05:30 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:30.643715 | orchestrator | 2025-02-19 09:05:30 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:30.644579 | orchestrator | 2025-02-19 09:05:30 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:33.697528 | orchestrator | 2025-02-19 09:05:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:33.697661 | orchestrator | 2025-02-19 09:05:33 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:33.698362 | orchestrator | 2025-02-19 09:05:33 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:33.700990 | orchestrator | 2025-02-19 09:05:33 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:33.701723 | orchestrator | 2025-02-19 09:05:33 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:36.738351 | orchestrator | 2025-02-19 09:05:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:36.738554 | orchestrator | 2025-02-19 09:05:36 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:36.740092 | orchestrator | 2025-02-19 09:05:36 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:36.740134 | orchestrator | 2025-02-19 09:05:36 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:36.740936 | orchestrator | 2025-02-19 09:05:36 | INFO  | Task 
48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:39.788004 | orchestrator | 2025-02-19 09:05:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:39.788095 | orchestrator | 2025-02-19 09:05:39 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:39.788817 | orchestrator | 2025-02-19 09:05:39 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:39.789917 | orchestrator | 2025-02-19 09:05:39 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:39.791234 | orchestrator | 2025-02-19 09:05:39 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:39.792614 | orchestrator | 2025-02-19 09:05:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:42.829675 | orchestrator | 2025-02-19 09:05:42 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:42.830537 | orchestrator | 2025-02-19 09:05:42 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:42.830614 | orchestrator | 2025-02-19 09:05:42 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:42.830640 | orchestrator | 2025-02-19 09:05:42 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:45.883193 | orchestrator | 2025-02-19 09:05:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:45.883346 | orchestrator | 2025-02-19 09:05:45 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:45.888722 | orchestrator | 2025-02-19 09:05:45 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:45.888822 | orchestrator | 2025-02-19 09:05:45 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:45.888857 | orchestrator | 2025-02-19 09:05:45 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:48.930961 | orchestrator | 2025-02-19 09:05:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:48.931114 | orchestrator | 2025-02-19 09:05:48 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:48.931760 | orchestrator | 2025-02-19 09:05:48 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:48.931782 | orchestrator | 2025-02-19 09:05:48 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:48.931806 | orchestrator | 2025-02-19 09:05:48 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:51.977839 | orchestrator | 2025-02-19 09:05:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:51.977968 | orchestrator | 2025-02-19 09:05:51 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:51.978692 | orchestrator | 2025-02-19 09:05:51 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:51.980661 | orchestrator | 2025-02-19 09:05:51 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:51.981058 | orchestrator | 2025-02-19 09:05:51 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:51.981089 | orchestrator | 2025-02-19 09:05:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:55.056341 | orchestrator | 2025-02-19 09:05:55 | INFO  | Task 
d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:55.057304 | orchestrator | 2025-02-19 09:05:55 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:55.058236 | orchestrator | 2025-02-19 09:05:55 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:55.059299 | orchestrator | 2025-02-19 09:05:55 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:05:58.113681 | orchestrator | 2025-02-19 09:05:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:05:58.113854 | orchestrator | 2025-02-19 09:05:58 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:05:58.115064 | orchestrator | 2025-02-19 09:05:58 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:05:58.115115 | orchestrator | 2025-02-19 09:05:58 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:05:58.115877 | orchestrator | 2025-02-19 09:05:58 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:01.176377 | orchestrator | 2025-02-19 09:05:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:01.176500 | orchestrator | 2025-02-19 09:06:01 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:01.178442 | orchestrator | 2025-02-19 09:06:01 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:01.180105 | orchestrator | 2025-02-19 09:06:01 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:01.181197 | orchestrator | 2025-02-19 09:06:01 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:04.225701 | orchestrator | 2025-02-19 09:06:01 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:04.225853 | orchestrator | 2025-02-19 09:06:04 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:04.226737 | orchestrator | 2025-02-19 09:06:04 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:04.226773 | orchestrator | 2025-02-19 09:06:04 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:04.227954 | orchestrator | 2025-02-19 09:06:04 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:07.268469 | orchestrator | 2025-02-19 09:06:04 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:07.268646 | orchestrator | 2025-02-19 09:06:07 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:07.269439 | orchestrator | 2025-02-19 09:06:07 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:07.270659 | orchestrator | 2025-02-19 09:06:07 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:07.271773 | orchestrator | 2025-02-19 09:06:07 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:10.310553 | orchestrator | 2025-02-19 09:06:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:10.310719 | orchestrator | 2025-02-19 09:06:10 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:10.311392 | orchestrator | 2025-02-19 09:06:10 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:10.313014 | orchestrator | 2025-02-19 09:06:10 | INFO  | Task 
9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:10.314238 | orchestrator | 2025-02-19 09:06:10 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:13.369991 | orchestrator | 2025-02-19 09:06:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:13.370239 | orchestrator | 2025-02-19 09:06:13 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:13.372794 | orchestrator | 2025-02-19 09:06:13 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:16.421679 | orchestrator | 2025-02-19 09:06:13 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:16.421812 | orchestrator | 2025-02-19 09:06:13 | INFO  | Task 5d71bb41-9b7a-4774-bf7d-839be4baa332 is in state STARTED 2025-02-19 09:06:16.421831 | orchestrator | 2025-02-19 09:06:13 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:16.421864 | orchestrator | 2025-02-19 09:06:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:16.421895 | orchestrator | 2025-02-19 09:06:16 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:16.424084 | orchestrator | 2025-02-19 09:06:16 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:16.426278 | orchestrator | 2025-02-19 09:06:16 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:16.427989 | orchestrator | 2025-02-19 09:06:16 | INFO  | Task 5d71bb41-9b7a-4774-bf7d-839be4baa332 is in state STARTED 2025-02-19 09:06:16.429475 | orchestrator | 2025-02-19 09:06:16 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:19.487992 | orchestrator | 2025-02-19 09:06:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:19.488137 | orchestrator | 2025-02-19 09:06:19 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:19.489354 | orchestrator | 2025-02-19 09:06:19 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:19.489934 | orchestrator | 2025-02-19 09:06:19 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:19.491477 | orchestrator | 2025-02-19 09:06:19 | INFO  | Task 5d71bb41-9b7a-4774-bf7d-839be4baa332 is in state STARTED 2025-02-19 09:06:19.493571 | orchestrator | 2025-02-19 09:06:19 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:22.559520 | orchestrator | 2025-02-19 09:06:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:22.559681 | orchestrator | 2025-02-19 09:06:22 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:22.560444 | orchestrator | 2025-02-19 09:06:22 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:22.562272 | orchestrator | 2025-02-19 09:06:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:22.562906 | orchestrator | 2025-02-19 09:06:22 | INFO  | Task 5d71bb41-9b7a-4774-bf7d-839be4baa332 is in state STARTED 2025-02-19 09:06:22.563856 | orchestrator | 2025-02-19 09:06:22 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:22.564484 | orchestrator | 2025-02-19 09:06:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:25.613501 | orchestrator | 2025-02-19 09:06:25 | INFO  | Task 
d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:25.614299 | orchestrator | 2025-02-19 09:06:25 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:25.617401 | orchestrator | 2025-02-19 09:06:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:25.619244 | orchestrator | 2025-02-19 09:06:25 | INFO  | Task 5d71bb41-9b7a-4774-bf7d-839be4baa332 is in state STARTED 2025-02-19 09:06:25.622615 | orchestrator | 2025-02-19 09:06:25 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:28.678308 | orchestrator | 2025-02-19 09:06:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:28.678448 | orchestrator | 2025-02-19 09:06:28 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:28.681044 | orchestrator | 2025-02-19 09:06:28 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:28.682358 | orchestrator | 2025-02-19 09:06:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:28.683950 | orchestrator | 2025-02-19 09:06:28 | INFO  | Task 5d71bb41-9b7a-4774-bf7d-839be4baa332 is in state SUCCESS 2025-02-19 09:06:28.685942 | orchestrator | 2025-02-19 09:06:28 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:31.729441 | orchestrator | 2025-02-19 09:06:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:31.729576 | orchestrator | 2025-02-19 09:06:31 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:31.730316 | orchestrator | 2025-02-19 09:06:31 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:31.731735 | orchestrator | 2025-02-19 09:06:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:31.732582 | orchestrator | 2025-02-19 09:06:31 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:34.776518 | orchestrator | 2025-02-19 09:06:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:34.776649 | orchestrator | 2025-02-19 09:06:34 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:34.777113 | orchestrator | 2025-02-19 09:06:34 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:34.778951 | orchestrator | 2025-02-19 09:06:34 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:34.780429 | orchestrator | 2025-02-19 09:06:34 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:37.818297 | orchestrator | 2025-02-19 09:06:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:37.818479 | orchestrator | 2025-02-19 09:06:37 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:37.819053 | orchestrator | 2025-02-19 09:06:37 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:37.819227 | orchestrator | 2025-02-19 09:06:37 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:37.820843 | orchestrator | 2025-02-19 09:06:37 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:40.868583 | orchestrator | 2025-02-19 09:06:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:40.868750 | orchestrator | 2025-02-19 09:06:40 | INFO  | Task 
d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:40.869625 | orchestrator | 2025-02-19 09:06:40 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:40.870927 | orchestrator | 2025-02-19 09:06:40 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:40.871464 | orchestrator | 2025-02-19 09:06:40 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:43.921336 | orchestrator | 2025-02-19 09:06:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:43.921469 | orchestrator | 2025-02-19 09:06:43 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:43.921773 | orchestrator | 2025-02-19 09:06:43 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:43.923625 | orchestrator | 2025-02-19 09:06:43 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:43.926241 | orchestrator | 2025-02-19 09:06:43 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:46.981394 | orchestrator | 2025-02-19 09:06:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:46.981573 | orchestrator | 2025-02-19 09:06:46 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:46.981974 | orchestrator | 2025-02-19 09:06:46 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:46.983403 | orchestrator | 2025-02-19 09:06:46 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:46.984649 | orchestrator | 2025-02-19 09:06:46 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:50.022681 | orchestrator | 2025-02-19 09:06:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:50.022826 | orchestrator | 2025-02-19 09:06:50 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:50.025688 | orchestrator | 2025-02-19 09:06:50 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:50.027422 | orchestrator | 2025-02-19 09:06:50 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:50.028809 | orchestrator | 2025-02-19 09:06:50 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:50.029099 | orchestrator | 2025-02-19 09:06:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:53.080236 | orchestrator | 2025-02-19 09:06:53 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:53.082545 | orchestrator | 2025-02-19 09:06:53 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:53.082629 | orchestrator | 2025-02-19 09:06:53 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:53.083487 | orchestrator | 2025-02-19 09:06:53 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state STARTED 2025-02-19 09:06:56.134080 | orchestrator | 2025-02-19 09:06:53 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:56.134265 | orchestrator | 2025-02-19 09:06:56 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:56.136225 | orchestrator | 2025-02-19 09:06:56 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:56.138575 | orchestrator | 2025-02-19 09:06:56 | INFO  | Task 
9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:06:56.140089 | orchestrator | 2025-02-19 09:06:56 | INFO  | Task 48fe9c05-29cd-4b10-b207-9bfcdbddbef0 is in state SUCCESS 2025-02-19 09:06:56.141901 | orchestrator | 2025-02-19 09:06:56.142098 | orchestrator | None 2025-02-19 09:06:56.142319 | orchestrator | 2025-02-19 09:06:56.142343 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:06:56.142359 | orchestrator | 2025-02-19 09:06:56.142373 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:06:56.142387 | orchestrator | Wednesday 19 February 2025 09:05:01 +0000 (0:00:00.359) 0:00:00.359 **** 2025-02-19 09:06:56.142415 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.142432 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.142446 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.142460 | orchestrator | 2025-02-19 09:06:56.142474 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:06:56.142488 | orchestrator | Wednesday 19 February 2025 09:05:02 +0000 (0:00:00.485) 0:00:00.845 **** 2025-02-19 09:06:56.142502 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-02-19 09:06:56.142516 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-02-19 09:06:56.142530 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-02-19 09:06:56.142544 | orchestrator | 2025-02-19 09:06:56.142558 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-02-19 09:06:56.142572 | orchestrator | 2025-02-19 09:06:56.142586 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-19 09:06:56.142600 | orchestrator | Wednesday 19 February 2025 09:05:02 +0000 (0:00:00.352) 0:00:01.198 **** 2025-02-19 09:06:56.142614 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:06:56.142629 | orchestrator | 2025-02-19 09:06:56.142643 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-02-19 09:06:56.142657 | orchestrator | Wednesday 19 February 2025 09:05:03 +0000 (0:00:01.142) 0:00:02.340 **** 2025-02-19 09:06:56.142675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.142728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.142748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.142780 | orchestrator | 2025-02-19 09:06:56.142795 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-02-19 09:06:56.142814 | orchestrator | Wednesday 19 February 2025 09:05:05 +0000 (0:00:02.206) 0:00:04.547 **** 2025-02-19 09:06:56.142831 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.142854 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.142878 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.142900 | orchestrator | 2025-02-19 09:06:56.142922 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-19 09:06:56.142945 | orchestrator | Wednesday 19 February 2025 09:05:06 +0000 (0:00:00.310) 0:00:04.858 **** 2025-02-19 09:06:56.142980 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-19 09:06:56.143004 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-02-19 09:06:56.143029 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-02-19 09:06:56.143053 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-02-19 09:06:56.143077 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-02-19 09:06:56.143092 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-02-19 09:06:56.143109 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-19 09:06:56.143125 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-02-19 09:06:56.143140 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-02-19 09:06:56.143156 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-02-19 09:06:56.143202 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-02-19 09:06:56.143220 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-02-19 09:06:56.143237 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-19 09:06:56.143253 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-02-19 09:06:56.143266 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-02-19 09:06:56.143280 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-02-19 09:06:56.143293 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-02-19 09:06:56.143307 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-02-19 09:06:56.143332 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-02-19 09:06:56.143354 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-02-19 09:06:56.143369 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-02-19 09:06:56.143383 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-02-19 09:06:56.143397 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-02-19 09:06:56.143412 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ironic', 'enabled': True}) 2025-02-19 09:06:56.143426 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-02-19 09:06:56.143439 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-02-19 09:06:56.143453 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-02-19 09:06:56.143467 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-02-19 09:06:56.143481 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-02-19 09:06:56.143494 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-02-19 09:06:56.143508 | orchestrator | 2025-02-19 09:06:56.143522 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.143536 | orchestrator | Wednesday 19 February 2025 09:05:07 +0000 (0:00:01.228) 0:00:06.086 **** 2025-02-19 09:06:56.143550 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.143569 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.143583 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.143597 | orchestrator | 2025-02-19 09:06:56.143610 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.143624 | orchestrator | Wednesday 19 February 2025 09:05:07 +0000 (0:00:00.499) 0:00:06.586 **** 2025-02-19 09:06:56.143638 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.143652 | orchestrator | 2025-02-19 09:06:56.143665 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.143679 | orchestrator | Wednesday 19 February 2025 09:05:08 +0000 (0:00:00.195) 0:00:06.781 **** 2025-02-19 09:06:56.143700 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.143715 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.143729 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.143742 | orchestrator | 2025-02-19 09:06:56.143756 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.143770 | orchestrator | Wednesday 19 February 2025 09:05:08 +0000 (0:00:00.672) 0:00:07.454 **** 2025-02-19 09:06:56.143784 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.143798 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.143812 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.143826 | orchestrator | 2025-02-19 09:06:56.143840 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.143860 | orchestrator | Wednesday 19 February 2025 09:05:09 +0000 (0:00:00.392) 0:00:07.846 **** 2025-02-19 09:06:56.143874 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.143888 | orchestrator | 2025-02-19 09:06:56.143901 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.143920 | orchestrator | Wednesday 19 February 2025 09:05:09 +0000 (0:00:00.283) 0:00:08.130 **** 2025-02-19 09:06:56.143934 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.143948 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.143962 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.143986 | orchestrator | 2025-02-19 09:06:56.144009 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.144032 | orchestrator | Wednesday 19 February 2025 09:05:09 +0000 (0:00:00.293) 0:00:08.423 **** 2025-02-19 09:06:56.144055 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.144077 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.144100 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.144124 | orchestrator | 2025-02-19 09:06:56.144146 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.144169 | orchestrator | Wednesday 19 February 2025 09:05:10 +0000 (0:00:00.807) 0:00:09.230 **** 2025-02-19 09:06:56.144217 | 
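
The surrounding block is the horizon role working through its per-service policy handling: /ansible/roles/horizon/tasks/policy_item.yml is included once for every enabled service (ceilometer, cinder, designate, glance, heat, ironic, keystone, magnum, manila, neutron, nova, octavia), the disabled ones (cloudkitty, masakari, mistral, tacker, trove, watcher) are skipped, and for each included service the "Check if policies shall be overwritten" / "Update custom policy file name" steps skip because no custom policy files are supplied in this testbed. A rough sketch of that selection logic in Python; the search directory and file-name patterns below are assumptions for illustration, not the role's actual lookup:

    from pathlib import Path

    SERVICES = {
        "cloudkitty": False, "masakari": False, "mistral": False,
        "tacker": False, "trove": False, "watcher": False,
        "ceilometer": True, "cinder": True, "designate": True,
        "glance": True, "heat": True, "ironic": True, "keystone": True,
        "magnum": True, "manila": True, "neutron": True, "nova": True,
        "octavia": True,
    }

    def find_custom_policy(service, search_dir="/opt/configuration/policies"):
        """Return a custom policy file for the service, or None if nothing is provided."""
        for name in (f"{service}_policy.yaml", f"{service}_policy.json"):
            candidate = Path(search_dir) / name
            if candidate.is_file():
                return candidate
        return None

    for service, enabled in SERVICES.items():
        if not enabled:
            continue                # disabled services are skipped outright
        policy = find_custom_policy(service)
        if policy is None:
            continue                # no override -> the overwrite steps skip
        print(f"would install {policy} as the {service} policy file for Horizon")
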
orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.144241 | orchestrator | 2025-02-19 09:06:56.144265 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.144288 | orchestrator | Wednesday 19 February 2025 09:05:10 +0000 (0:00:00.158) 0:00:09.389 **** 2025-02-19 09:06:56.144303 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.144317 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.144330 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.144344 | orchestrator | 2025-02-19 09:06:56.144358 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.144372 | orchestrator | Wednesday 19 February 2025 09:05:11 +0000 (0:00:00.476) 0:00:09.866 **** 2025-02-19 09:06:56.144386 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.144400 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.144413 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.144427 | orchestrator | 2025-02-19 09:06:56.144441 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.144455 | orchestrator | Wednesday 19 February 2025 09:05:11 +0000 (0:00:00.523) 0:00:10.390 **** 2025-02-19 09:06:56.144468 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.144482 | orchestrator | 2025-02-19 09:06:56.144496 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.144510 | orchestrator | Wednesday 19 February 2025 09:05:11 +0000 (0:00:00.124) 0:00:10.514 **** 2025-02-19 09:06:56.144523 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.144537 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.144550 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.144564 | orchestrator | 2025-02-19 09:06:56.144578 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.144591 | orchestrator | Wednesday 19 February 2025 09:05:12 +0000 (0:00:00.496) 0:00:11.011 **** 2025-02-19 09:06:56.144605 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.144619 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.144633 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.144646 | orchestrator | 2025-02-19 09:06:56.144660 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.144674 | orchestrator | Wednesday 19 February 2025 09:05:12 +0000 (0:00:00.321) 0:00:11.332 **** 2025-02-19 09:06:56.144687 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.144701 | orchestrator | 2025-02-19 09:06:56.144714 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.144728 | orchestrator | Wednesday 19 February 2025 09:05:12 +0000 (0:00:00.252) 0:00:11.584 **** 2025-02-19 09:06:56.144742 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.144765 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.144779 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.144793 | orchestrator | 2025-02-19 09:06:56.144806 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.144820 | orchestrator | Wednesday 19 February 2025 09:05:13 +0000 (0:00:00.388) 0:00:11.972 **** 2025-02-19 09:06:56.144833 | orchestrator | ok: 
[testbed-node-0] 2025-02-19 09:06:56.144847 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.144861 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.144874 | orchestrator | 2025-02-19 09:06:56.144888 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.144902 | orchestrator | Wednesday 19 February 2025 09:05:14 +0000 (0:00:01.118) 0:00:13.091 **** 2025-02-19 09:06:56.144915 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.144929 | orchestrator | 2025-02-19 09:06:56.144942 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.144956 | orchestrator | Wednesday 19 February 2025 09:05:14 +0000 (0:00:00.186) 0:00:13.277 **** 2025-02-19 09:06:56.144969 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.144983 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.144996 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.145010 | orchestrator | 2025-02-19 09:06:56.145024 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.145037 | orchestrator | Wednesday 19 February 2025 09:05:15 +0000 (0:00:00.598) 0:00:13.876 **** 2025-02-19 09:06:56.145051 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.145065 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.145079 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.145092 | orchestrator | 2025-02-19 09:06:56.145114 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.145128 | orchestrator | Wednesday 19 February 2025 09:05:15 +0000 (0:00:00.470) 0:00:14.346 **** 2025-02-19 09:06:56.145142 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.145155 | orchestrator | 2025-02-19 09:06:56.145169 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.145256 | orchestrator | Wednesday 19 February 2025 09:05:15 +0000 (0:00:00.173) 0:00:14.519 **** 2025-02-19 09:06:56.145270 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.145284 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.145297 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.145311 | orchestrator | 2025-02-19 09:06:56.145325 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.145345 | orchestrator | Wednesday 19 February 2025 09:05:16 +0000 (0:00:00.538) 0:00:15.058 **** 2025-02-19 09:06:56.145360 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.145373 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.145387 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.145402 | orchestrator | 2025-02-19 09:06:56.145415 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.145429 | orchestrator | Wednesday 19 February 2025 09:05:16 +0000 (0:00:00.466) 0:00:15.524 **** 2025-02-19 09:06:56.145443 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.145465 | orchestrator | 2025-02-19 09:06:56.145479 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.145493 | orchestrator | Wednesday 19 February 2025 09:05:17 +0000 (0:00:00.331) 0:00:15.856 **** 2025-02-19 09:06:56.145507 | orchestrator | skipping: [testbed-node-0] 2025-02-19 
09:06:56.145522 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.145537 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.145550 | orchestrator | 2025-02-19 09:06:56.145564 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.145578 | orchestrator | Wednesday 19 February 2025 09:05:17 +0000 (0:00:00.303) 0:00:16.160 **** 2025-02-19 09:06:56.145592 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.145606 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.145630 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.145654 | orchestrator | 2025-02-19 09:06:56.145678 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.145701 | orchestrator | Wednesday 19 February 2025 09:05:18 +0000 (0:00:00.547) 0:00:16.707 **** 2025-02-19 09:06:56.145723 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.145745 | orchestrator | 2025-02-19 09:06:56.145767 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.145788 | orchestrator | Wednesday 19 February 2025 09:05:18 +0000 (0:00:00.269) 0:00:16.977 **** 2025-02-19 09:06:56.145807 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.145826 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.145839 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.145859 | orchestrator | 2025-02-19 09:06:56.145879 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.145899 | orchestrator | Wednesday 19 February 2025 09:05:19 +0000 (0:00:00.790) 0:00:17.767 **** 2025-02-19 09:06:56.145919 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.145940 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.145961 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.145982 | orchestrator | 2025-02-19 09:06:56.145995 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.146008 | orchestrator | Wednesday 19 February 2025 09:05:19 +0000 (0:00:00.531) 0:00:18.299 **** 2025-02-19 09:06:56.146050 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.146065 | orchestrator | 2025-02-19 09:06:56.146078 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.146090 | orchestrator | Wednesday 19 February 2025 09:05:19 +0000 (0:00:00.155) 0:00:18.454 **** 2025-02-19 09:06:56.146103 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.146115 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.146127 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.146140 | orchestrator | 2025-02-19 09:06:56.146152 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.146164 | orchestrator | Wednesday 19 February 2025 09:05:20 +0000 (0:00:00.534) 0:00:18.989 **** 2025-02-19 09:06:56.146204 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.146220 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.146233 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.146245 | orchestrator | 2025-02-19 09:06:56.146258 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.146270 | orchestrator | Wednesday 19 February 2025 09:05:20 +0000 (0:00:00.358) 
0:00:19.347 **** 2025-02-19 09:06:56.146282 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.146295 | orchestrator | 2025-02-19 09:06:56.146307 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.146319 | orchestrator | Wednesday 19 February 2025 09:05:21 +0000 (0:00:00.626) 0:00:19.973 **** 2025-02-19 09:06:56.146331 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.146344 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.146356 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.146368 | orchestrator | 2025-02-19 09:06:56.146381 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-19 09:06:56.146393 | orchestrator | Wednesday 19 February 2025 09:05:22 +0000 (0:00:00.854) 0:00:20.828 **** 2025-02-19 09:06:56.146405 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:06:56.146418 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:06:56.146430 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:06:56.146442 | orchestrator | 2025-02-19 09:06:56.146455 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-19 09:06:56.146467 | orchestrator | Wednesday 19 February 2025 09:05:22 +0000 (0:00:00.656) 0:00:21.484 **** 2025-02-19 09:06:56.146480 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.146492 | orchestrator | 2025-02-19 09:06:56.146504 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-19 09:06:56.146531 | orchestrator | Wednesday 19 February 2025 09:05:23 +0000 (0:00:00.376) 0:00:21.860 **** 2025-02-19 09:06:56.146544 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.146557 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.146578 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.146591 | orchestrator | 2025-02-19 09:06:56.146604 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-02-19 09:06:56.146616 | orchestrator | Wednesday 19 February 2025 09:05:24 +0000 (0:00:00.922) 0:00:22.783 **** 2025-02-19 09:06:56.146628 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:06:56.146641 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:06:56.146653 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:06:56.146665 | orchestrator | 2025-02-19 09:06:56.146677 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-02-19 09:06:56.146696 | orchestrator | Wednesday 19 February 2025 09:05:28 +0000 (0:00:04.601) 0:00:27.384 **** 2025-02-19 09:06:56.146709 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-19 09:06:56.146721 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-19 09:06:56.146733 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-19 09:06:56.146746 | orchestrator | 2025-02-19 09:06:56.146758 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-02-19 09:06:56.146770 | orchestrator | Wednesday 19 February 2025 09:05:32 +0000 (0:00:03.915) 0:00:31.300 **** 2025-02-19 09:06:56.146783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-19 09:06:56.146799 | 
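
At this point the role has written config.json, horizon.conf and the _9998-kolla-settings.py snippet (with _9999-custom-settings.py following) into /etc/kolla/horizon/ on each node. The volumes list shown earlier bind-mounts that directory read-only into the container at /var/lib/kolla/config_files/, and the kolla start script inside the image copies every entry listed in config.json to its destination before launching the service. An illustrative shape of such a config.json, expressed as a Python structure; the command and destination paths are assumptions, not values taken from this run:

    import json

    horizon_config = {
        # process the container should exec once the config files are in place
        "command": "/usr/sbin/apache2 -DFOREGROUND",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/horizon.conf",
                "dest": "/etc/apache2/conf-enabled/horizon.conf",
                "owner": "horizon",
                "perm": "0600",
            },
            {
                "source": "/var/lib/kolla/config_files/_9998-kolla-settings.py",
                "dest": "/etc/openstack-dashboard/local_settings.d/_9998-kolla-settings.py",
                "owner": "horizon",
                "perm": "0600",
            },
        ],
    }

    print(json.dumps(horizon_config, indent=2))
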
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-19 09:06:56.146821 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-19 09:06:56.146841 | orchestrator | 2025-02-19 09:06:56.146862 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-02-19 09:06:56.146880 | orchestrator | Wednesday 19 February 2025 09:05:36 +0000 (0:00:04.000) 0:00:35.300 **** 2025-02-19 09:06:56.146899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-19 09:06:56.146920 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-19 09:06:56.146941 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-19 09:06:56.146963 | orchestrator | 2025-02-19 09:06:56.146983 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-02-19 09:06:56.147005 | orchestrator | Wednesday 19 February 2025 09:05:40 +0000 (0:00:03.478) 0:00:38.779 **** 2025-02-19 09:06:56.147026 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.147046 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.147067 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.147088 | orchestrator | 2025-02-19 09:06:56.147109 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-02-19 09:06:56.147130 | orchestrator | Wednesday 19 February 2025 09:05:40 +0000 (0:00:00.455) 0:00:39.234 **** 2025-02-19 09:06:56.147145 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.147157 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.147169 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.147201 | orchestrator | 2025-02-19 09:06:56.147214 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-19 09:06:56.147226 | orchestrator | Wednesday 19 February 2025 09:05:41 +0000 (0:00:00.787) 0:00:40.021 **** 2025-02-19 09:06:56.147239 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:06:56.147251 | orchestrator | 2025-02-19 09:06:56.147263 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-02-19 09:06:56.147285 | orchestrator | Wednesday 19 February 2025 09:05:42 +0000 (0:00:01.083) 0:00:41.105 **** 2025-02-19 09:06:56.147310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.147325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.147359 | 
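
Every horizon container definition in these item dumps also carries a healthcheck: 'healthcheck_curl http://<node address>:80' run as CMD-SHELL every 30 seconds with 3 retries, a 5 second start period and a 30 second timeout. healthcheck_curl is a small helper shipped inside the kolla images; a rough stand-in with the same exit-code contract (0 when the endpoint answers, non-zero otherwise), not the real script's exact semantics, could look like this, with the default URL simply taken from the node-0 address in the log:

    import sys
    import urllib.error
    import urllib.request

    def probe(url, timeout=30):
        """Return True if the endpoint answers with an HTTP status below 500."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as err:
            return err.code < 500      # 4xx still shows the server is answering
        except Exception:
            return False               # connection refused, timeout, DNS failure, ...

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "http://192.168.16.10:80"
        sys.exit(0 if probe(target) else 1)
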
orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.147378 | orchestrator | 2025-02-19 09:06:56.147391 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-02-19 09:06:56.147403 | orchestrator | Wednesday 19 February 2025 09:05:44 +0000 (0:00:02.368) 0:00:43.473 **** 2025-02-19 09:06:56.147416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:06:56.147435 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.147455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:06:56.147473 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.147486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:06:56.147506 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.147518 | orchestrator | 2025-02-19 09:06:56.147531 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-02-19 09:06:56.147543 | orchestrator | Wednesday 19 February 2025 09:05:46 +0000 (0:00:01.878) 0:00:45.351 **** 2025-02-19 09:06:56.147571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:06:56.147585 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.147608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:06:56.147627 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.147640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-19 09:06:56.147662 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.147675 | orchestrator | 2025-02-19 09:06:56.147687 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-02-19 09:06:56.147700 | orchestrator | Wednesday 19 February 2025 09:05:48 +0000 (0:00:01.822) 0:00:47.174 **** 2025-02-19 09:06:56.147719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.147733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.147768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-19 09:06:56.147782 | orchestrator | 2025-02-19 09:06:56.147802 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-19 09:06:56.147822 | orchestrator | Wednesday 19 February 2025 09:05:55 +0000 (0:00:06.840) 0:00:54.015 **** 2025-02-19 09:06:56.147840 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:06:56.147859 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:06:56.147881 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:06:56.147900 | orchestrator | 2025-02-19 09:06:56.147919 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-19 09:06:56.147938 | orchestrator | Wednesday 19 February 2025 09:05:56 +0000 (0:00:00.754) 0:00:54.770 **** 2025-02-19 09:06:56.147959 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:06:56.147978 | orchestrator | 2025-02-19 09:06:56.147998 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-02-19 09:06:56.148018 | orchestrator | Wednesday 19 February 2025 09:05:57 +0000 (0:00:00.970) 0:00:55.741 **** 2025-02-19 09:06:56.148052 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:06:56.148081 | orchestrator | 2025-02-19 09:06:56.148096 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-02-19 09:06:56.148108 | orchestrator | Wednesday 19 February 2025 09:06:00 +0000 (0:00:03.201) 0:00:58.942 **** 2025-02-19 09:06:56.148121 | orchestrator | 
changed: [testbed-node-0] 2025-02-19 09:06:56.148133 | orchestrator | 2025-02-19 09:06:56.148145 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-02-19 09:06:56.148158 | orchestrator | Wednesday 19 February 2025 09:06:02 +0000 (0:00:02.671) 0:01:01.614 **** 2025-02-19 09:06:56.148187 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:06:56.148201 | orchestrator | 2025-02-19 09:06:56.148213 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-19 09:06:56.148226 | orchestrator | Wednesday 19 February 2025 09:06:20 +0000 (0:00:17.069) 0:01:18.683 **** 2025-02-19 09:06:56.148238 | orchestrator | 2025-02-19 09:06:56.148256 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-19 09:06:56.148268 | orchestrator | Wednesday 19 February 2025 09:06:20 +0000 (0:00:00.075) 0:01:18.758 **** 2025-02-19 09:06:56.148281 | orchestrator | 2025-02-19 09:06:56.148293 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-02-19 09:06:56.148305 | orchestrator | Wednesday 19 February 2025 09:06:20 +0000 (0:00:00.232) 0:01:18.991 **** 2025-02-19 09:06:56.148318 | orchestrator | 2025-02-19 09:06:56.148330 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-02-19 09:06:56.148342 | orchestrator | Wednesday 19 February 2025 09:06:20 +0000 (0:00:00.074) 0:01:19.065 **** 2025-02-19 09:06:56.148355 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:06:56.148367 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:06:56.148379 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:06:56.148391 | orchestrator | 2025-02-19 09:06:56.148404 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:06:56.148417 | orchestrator | testbed-node-0 : ok=41  changed=11  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-02-19 09:06:56.148431 | orchestrator | testbed-node-1 : ok=38  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-19 09:06:56.148443 | orchestrator | testbed-node-2 : ok=38  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-19 09:06:56.148456 | orchestrator | 2025-02-19 09:06:56.148468 | orchestrator | 2025-02-19 09:06:56.148480 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:06:56.148492 | orchestrator | Wednesday 19 February 2025 09:06:53 +0000 (0:00:33.228) 0:01:52.293 **** 2025-02-19 09:06:56.148505 | orchestrator | =============================================================================== 2025-02-19 09:06:56.148526 | orchestrator | horizon : Restart horizon container ------------------------------------ 33.23s 2025-02-19 09:06:56.148785 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.07s 2025-02-19 09:06:56.148827 | orchestrator | horizon : Deploy horizon container -------------------------------------- 6.84s 2025-02-19 09:06:56.148849 | orchestrator | horizon : Copying over config.json files for services ------------------- 4.60s 2025-02-19 09:06:56.148872 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 4.00s 2025-02-19 09:06:56.148896 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.92s 2025-02-19 09:06:56.148919 | orchestrator | 
horizon : Copying over custom-settings.py ------------------------------- 3.48s 2025-02-19 09:06:56.148941 | orchestrator | horizon : Creating Horizon database ------------------------------------- 3.20s 2025-02-19 09:06:56.148975 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.67s 2025-02-19 09:06:59.192636 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.37s 2025-02-19 09:06:59.192792 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 2.21s 2025-02-19 09:06:59.192813 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.88s 2025-02-19 09:06:59.192829 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.82s 2025-02-19 09:06:59.192843 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.23s 2025-02-19 09:06:59.192858 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.14s 2025-02-19 09:06:59.192872 | orchestrator | horizon : Update policy file name --------------------------------------- 1.12s 2025-02-19 09:06:59.192886 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.08s 2025-02-19 09:06:59.192900 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.97s 2025-02-19 09:06:59.192914 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.92s 2025-02-19 09:06:59.192928 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.85s 2025-02-19 09:06:59.192942 | orchestrator | 2025-02-19 09:06:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:06:59.192974 | orchestrator | 2025-02-19 09:06:59 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:06:59.193683 | orchestrator | 2025-02-19 09:06:59 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:06:59.193718 | orchestrator | 2025-02-19 09:06:59 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:02.258930 | orchestrator | 2025-02-19 09:06:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:02.259070 | orchestrator | 2025-02-19 09:07:02 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:02.259658 | orchestrator | 2025-02-19 09:07:02 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:07:02.261244 | orchestrator | 2025-02-19 09:07:02 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:05.304642 | orchestrator | 2025-02-19 09:07:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:05.304787 | orchestrator | 2025-02-19 09:07:05 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:05.306289 | orchestrator | 2025-02-19 09:07:05 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:07:05.307314 | orchestrator | 2025-02-19 09:07:05 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:08.352323 | orchestrator | 2025-02-19 09:07:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:08.352462 | orchestrator | 2025-02-19 09:07:08 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:11.383004 | orchestrator | 
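
Between plays, the deploy wrapper keeps polling the state of the OSISM tasks it started, logging `is in state STARTED` for each task ID and sleeping one second between checks until a task reaches `SUCCESS` (as one does at 09:07:26 below). A minimal illustration of such a wait loop, assuming a hypothetical `get_task_state(task_id)` helper rather than the actual OSISM client API:

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll until every task has left the STARTED state.

    get_task_state is a caller-supplied callable (hypothetical here) that
    returns a state string such as 'STARTED' or 'SUCCESS' for a task id.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```
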
2025-02-19 09:07:08 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:07:11.383122 | orchestrator | 2025-02-19 09:07:08 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:11.383137 | orchestrator | 2025-02-19 09:07:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:11.383162 | orchestrator | 2025-02-19 09:07:11 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:14.420812 | orchestrator | 2025-02-19 09:07:11 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:07:14.420907 | orchestrator | 2025-02-19 09:07:11 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:14.420935 | orchestrator | 2025-02-19 09:07:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:14.420980 | orchestrator | 2025-02-19 09:07:14 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:14.422586 | orchestrator | 2025-02-19 09:07:14 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:07:14.422630 | orchestrator | 2025-02-19 09:07:14 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:17.464083 | orchestrator | 2025-02-19 09:07:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:17.464292 | orchestrator | 2025-02-19 09:07:17 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:17.464451 | orchestrator | 2025-02-19 09:07:17 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:07:17.465851 | orchestrator | 2025-02-19 09:07:17 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:17.465899 | orchestrator | 2025-02-19 09:07:17 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:20.505461 | orchestrator | 2025-02-19 09:07:20 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:20.507383 | orchestrator | 2025-02-19 09:07:20 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:07:20.509703 | orchestrator | 2025-02-19 09:07:20 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:20.510383 | orchestrator | 2025-02-19 09:07:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:23.556782 | orchestrator | 2025-02-19 09:07:23 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:23.558717 | orchestrator | 2025-02-19 09:07:23 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state STARTED 2025-02-19 09:07:23.558810 | orchestrator | 2025-02-19 09:07:23 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:26.595626 | orchestrator | 2025-02-19 09:07:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:26.595776 | orchestrator | 2025-02-19 09:07:26 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:26.596501 | orchestrator | 2025-02-19 09:07:26 | INFO  | Task cce779f2-7eea-4fba-87f7-27c8fd4ad1af is in state STARTED 2025-02-19 09:07:26.598125 | orchestrator | 2025-02-19 09:07:26 | INFO  | Task 9dacf295-734f-427b-b93d-4689cb67b27c is in state SUCCESS 2025-02-19 09:07:26.600836 | orchestrator | 2025-02-19 09:07:26.600878 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-19 09:07:26.600894 | orchestrator 
| 2025-02-19 09:07:26.600909 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-02-19 09:07:26.600923 | orchestrator | 2025-02-19 09:07:26.600952 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-19 09:07:26.600967 | orchestrator | Wednesday 19 February 2025 09:04:58 +0000 (0:00:01.345) 0:00:01.345 **** 2025-02-19 09:07:26.600983 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:07:26.600999 | orchestrator | 2025-02-19 09:07:26.601013 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-19 09:07:26.601027 | orchestrator | Wednesday 19 February 2025 09:04:58 +0000 (0:00:00.709) 0:00:02.054 **** 2025-02-19 09:07:26.601042 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-02-19 09:07:26.601057 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-02-19 09:07:26.601071 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-02-19 09:07:26.601085 | orchestrator | 2025-02-19 09:07:26.601099 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-19 09:07:26.601135 | orchestrator | Wednesday 19 February 2025 09:04:59 +0000 (0:00:00.957) 0:00:03.012 **** 2025-02-19 09:07:26.601150 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:07:26.601164 | orchestrator | 2025-02-19 09:07:26.601178 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-19 09:07:26.601261 | orchestrator | Wednesday 19 February 2025 09:05:00 +0000 (0:00:00.856) 0:00:03.868 **** 2025-02-19 09:07:26.601276 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.601291 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.601306 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.601320 | orchestrator | 2025-02-19 09:07:26.601334 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-19 09:07:26.601348 | orchestrator | Wednesday 19 February 2025 09:05:01 +0000 (0:00:00.716) 0:00:04.585 **** 2025-02-19 09:07:26.601361 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.601375 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.601389 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.601403 | orchestrator | 2025-02-19 09:07:26.601417 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-19 09:07:26.601434 | orchestrator | Wednesday 19 February 2025 09:05:01 +0000 (0:00:00.322) 0:00:04.907 **** 2025-02-19 09:07:26.601451 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.601466 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.601482 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.601498 | orchestrator | 2025-02-19 09:07:26.601515 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-19 09:07:26.601531 | orchestrator | Wednesday 19 February 2025 09:05:02 +0000 (0:00:00.929) 0:00:05.837 **** 2025-02-19 09:07:26.601546 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.601562 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.601578 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.601593 | 
orchestrator | 2025-02-19 09:07:26.601610 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-19 09:07:26.601626 | orchestrator | Wednesday 19 February 2025 09:05:03 +0000 (0:00:00.506) 0:00:06.343 **** 2025-02-19 09:07:26.601642 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.601657 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.601673 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.601690 | orchestrator | 2025-02-19 09:07:26.601705 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-19 09:07:26.601721 | orchestrator | Wednesday 19 February 2025 09:05:03 +0000 (0:00:00.540) 0:00:06.884 **** 2025-02-19 09:07:26.601736 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.601752 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.601767 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.601783 | orchestrator | 2025-02-19 09:07:26.601799 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-19 09:07:26.601813 | orchestrator | Wednesday 19 February 2025 09:05:04 +0000 (0:00:00.532) 0:00:07.417 **** 2025-02-19 09:07:26.601827 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.601841 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.601855 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.601869 | orchestrator | 2025-02-19 09:07:26.601882 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-19 09:07:26.601896 | orchestrator | Wednesday 19 February 2025 09:05:04 +0000 (0:00:00.601) 0:00:08.019 **** 2025-02-19 09:07:26.601910 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.601924 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.601938 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.601952 | orchestrator | 2025-02-19 09:07:26.601966 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-19 09:07:26.601980 | orchestrator | Wednesday 19 February 2025 09:05:05 +0000 (0:00:00.346) 0:00:08.365 **** 2025-02-19 09:07:26.601994 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-19 09:07:26.602061 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:07:26.602079 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:07:26.602093 | orchestrator | 2025-02-19 09:07:26.602108 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-19 09:07:26.602122 | orchestrator | Wednesday 19 February 2025 09:05:06 +0000 (0:00:00.800) 0:00:09.166 **** 2025-02-19 09:07:26.602135 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.602150 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.602164 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.602178 | orchestrator | 2025-02-19 09:07:26.602213 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-19 09:07:26.602228 | orchestrator | Wednesday 19 February 2025 09:05:06 +0000 (0:00:00.649) 0:00:09.816 **** 2025-02-19 09:07:26.602255 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-19 09:07:26.602277 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:07:26.602292 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:07:26.602307 | orchestrator | 2025-02-19 09:07:26.602320 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-19 09:07:26.602334 | orchestrator | Wednesday 19 February 2025 09:05:09 +0000 (0:00:02.600) 0:00:12.416 **** 2025-02-19 09:07:26.602349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:07:26.602363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:07:26.602377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:07:26.602391 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.602406 | orchestrator | 2025-02-19 09:07:26.602420 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-19 09:07:26.602434 | orchestrator | Wednesday 19 February 2025 09:05:09 +0000 (0:00:00.461) 0:00:12.878 **** 2025-02-19 09:07:26.602449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-19 09:07:26.602466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-19 09:07:26.602480 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-19 09:07:26.602494 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.602508 | orchestrator | 2025-02-19 09:07:26.602522 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-19 09:07:26.602536 | orchestrator | Wednesday 19 February 2025 09:05:10 +0000 (0:00:00.865) 0:00:13.743 **** 2025-02-19 09:07:26.602551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-19 09:07:26.602572 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-19 09:07:26.602594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-19 09:07:26.602609 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.602623 | orchestrator | 2025-02-19 09:07:26.602637 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-19 09:07:26.602651 | orchestrator | Wednesday 19 February 2025 09:05:10 +0000 (0:00:00.193) 0:00:13.937 **** 2025-02-19 09:07:26.602667 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '503d814a20c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-19 09:05:07.851522', 'end': '2025-02-19 09:05:07.908180', 'delta': '0:00:00.056658', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['503d814a20c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-02-19 09:07:26.602696 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'e3cd9b49c1a8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-19 09:05:08.505281', 'end': '2025-02-19 09:05:08.557787', 'delta': '0:00:00.052506', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e3cd9b49c1a8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-02-19 09:07:26.602712 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '15bbdd8ce3da', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-19 09:05:09.156051', 'end': '2025-02-19 09:05:09.201241', 'delta': '0:00:00.045190', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['15bbdd8ce3da'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-02-19 09:07:26.602726 | orchestrator | 2025-02-19 09:07:26.602741 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-19 09:07:26.602755 | orchestrator | Wednesday 19 February 2025 09:05:11 +0000 (0:00:00.244) 0:00:14.182 **** 2025-02-19 09:07:26.602769 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.602783 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.602797 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.602811 | orchestrator | 2025-02-19 09:07:26.602825 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-19 09:07:26.602839 | orchestrator | Wednesday 19 February 2025 09:05:11 +0000 (0:00:00.590) 0:00:14.772 **** 2025-02-19 09:07:26.602853 | 
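
The loop results above show exactly how ceph-facts locates a running monitor: it runs `docker ps -q --filter name=ceph-mon-<hostname>` against each mon host and keeps the first non-empty container ID (here `503d814a20c2` on testbed-node-0) to build `container_exec_cmd`. A rough local sketch of the same probe — ceph-ansible actually runs it remotely via delegation, so this is illustrative only:

```python
import subprocess


def find_mon_container(hostname):
    """Return the ID of a running ceph-mon container for a host, or None.

    Mirrors the command visible in the task output above:
    `docker ps -q --filter name=ceph-mon-<hostname>`.
    """
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=True,
    )
    container_id = result.stdout.strip()
    return container_id or None


# Corresponding to the first loop item in the log, this deployment would
# return "503d814a20c2" for find_mon_container("testbed-node-0").
```
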
orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-02-19 09:07:26.602867 | orchestrator | 2025-02-19 09:07:26.602881 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-19 09:07:26.602902 | orchestrator | Wednesday 19 February 2025 09:05:13 +0000 (0:00:01.548) 0:00:16.321 **** 2025-02-19 09:07:26.602917 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.602930 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.602944 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.602958 | orchestrator | 2025-02-19 09:07:26.602972 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-19 09:07:26.602986 | orchestrator | Wednesday 19 February 2025 09:05:13 +0000 (0:00:00.642) 0:00:16.964 **** 2025-02-19 09:07:26.602999 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603013 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603027 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603041 | orchestrator | 2025-02-19 09:07:26.603054 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-19 09:07:26.603068 | orchestrator | Wednesday 19 February 2025 09:05:14 +0000 (0:00:00.603) 0:00:17.567 **** 2025-02-19 09:07:26.603082 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603095 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603109 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603123 | orchestrator | 2025-02-19 09:07:26.603136 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-19 09:07:26.603150 | orchestrator | Wednesday 19 February 2025 09:05:14 +0000 (0:00:00.348) 0:00:17.916 **** 2025-02-19 09:07:26.603164 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.603177 | orchestrator | 2025-02-19 09:07:26.603213 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-19 09:07:26.603228 | orchestrator | Wednesday 19 February 2025 09:05:15 +0000 (0:00:00.161) 0:00:18.077 **** 2025-02-19 09:07:26.603242 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603256 | orchestrator | 2025-02-19 09:07:26.603270 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-19 09:07:26.603284 | orchestrator | Wednesday 19 February 2025 09:05:15 +0000 (0:00:00.250) 0:00:18.327 **** 2025-02-19 09:07:26.603297 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603311 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603331 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603346 | orchestrator | 2025-02-19 09:07:26.603360 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-19 09:07:26.603374 | orchestrator | Wednesday 19 February 2025 09:05:15 +0000 (0:00:00.545) 0:00:18.873 **** 2025-02-19 09:07:26.603388 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603402 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603415 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603429 | orchestrator | 2025-02-19 09:07:26.603443 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-19 09:07:26.603458 | orchestrator | Wednesday 19 February 2025 09:05:16 +0000 (0:00:00.452) 0:00:19.325 
**** 2025-02-19 09:07:26.603472 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603486 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603500 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603513 | orchestrator | 2025-02-19 09:07:26.603528 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-19 09:07:26.603542 | orchestrator | Wednesday 19 February 2025 09:05:16 +0000 (0:00:00.420) 0:00:19.746 **** 2025-02-19 09:07:26.603556 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603570 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603591 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603606 | orchestrator | 2025-02-19 09:07:26.603620 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-19 09:07:26.603634 | orchestrator | Wednesday 19 February 2025 09:05:17 +0000 (0:00:00.398) 0:00:20.144 **** 2025-02-19 09:07:26.603648 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603662 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603676 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603697 | orchestrator | 2025-02-19 09:07:26.603712 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-19 09:07:26.603726 | orchestrator | Wednesday 19 February 2025 09:05:17 +0000 (0:00:00.674) 0:00:20.819 **** 2025-02-19 09:07:26.603740 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603754 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603768 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603782 | orchestrator | 2025-02-19 09:07:26.603795 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-19 09:07:26.603809 | orchestrator | Wednesday 19 February 2025 09:05:18 +0000 (0:00:00.387) 0:00:21.207 **** 2025-02-19 09:07:26.603823 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.603837 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.603851 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.603864 | orchestrator | 2025-02-19 09:07:26.603878 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-19 09:07:26.603892 | orchestrator | Wednesday 19 February 2025 09:05:18 +0000 (0:00:00.498) 0:00:21.705 **** 2025-02-19 09:07:26.603908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ffe4904--1899--5051--bec6--9b9e5f20cdb9-osd--block--3ffe4904--1899--5051--bec6--9b9e5f20cdb9', 'dm-uuid-LVM-gEJmrdxsi8tp7oi9IAUZfPfIca8NyMwBUandMV8FWSOsUmKZVrzNIrRAkdjGNneA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.603924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbf6aa6c--a724--5ce6--b507--3cef42d33bac-osd--block--bbf6aa6c--a724--5ce6--b507--3cef42d33bac', 'dm-uuid-LVM-DkF8lbRgUBw2OMYhZSYC3Mj76Auojemo6oME7Fny7y1DaH4u423Kt0pTeLvzlyID'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.603939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.603966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.603982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part1', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part14', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part15', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part16', 'scsi-SQEMU_QEMU_HARDDISK_61e23fdb-a6df-4be1-bbd9-f5a1c4b8f283-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3ffe4904--1899--5051--bec6--9b9e5f20cdb9-osd--block--3ffe4904--1899--5051--bec6--9b9e5f20cdb9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iVQ8Cq-b326-HJuN-XZS4-gVgm-tX63-mBVLKz', 'scsi-0QEMU_QEMU_HARDDISK_0f115ae7-332f-47b5-bfba-4efd1297123a', 'scsi-SQEMU_QEMU_HARDDISK_0f115ae7-332f-47b5-bfba-4efd1297123a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bbf6aa6c--a724--5ce6--b507--3cef42d33bac-osd--block--bbf6aa6c--a724--5ce6--b507--3cef42d33bac'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JJdVgF-70Dz-4p2K-3PLu-wSk9-VY4w-6NjjOp', 'scsi-0QEMU_QEMU_HARDDISK_7ac42676-4a1f-422d-9e47-87a492d5a795', 'scsi-SQEMU_QEMU_HARDDISK_7ac42676-4a1f-422d-9e47-87a492d5a795'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b50482d4-467d-4151-94c3-bb810c8ecc19', 'scsi-SQEMU_QEMU_HARDDISK_b50482d4-467d-4151-94c3-bb810c8ecc19'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--118242ed--6ea1--54c4--bfaa--1565dde441bc-osd--block--118242ed--6ea1--54c4--bfaa--1565dde441bc', 'dm-uuid-LVM-CtaUjsMi1CYgydkFoChOl7u11z3fZlUqiGHzn1OUxfSbVbGcKoMeSlSr1s4lBlXC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f77e8fc9--ceed--59c4--8328--4d335fb6ee54-osd--block--f77e8fc9--ceed--59c4--8328--4d335fb6ee54', 
'dm-uuid-LVM-wWMRE2h8DeB3rvvyk4QGX6d1HblS3ppEYWzYMkr0qNhPWZkQRKJ6MBwDAXSwuCkw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604256 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.604277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--45b4b457--0c8f--5565--8330--30b761ce6399-osd--block--45b4b457--0c8f--5565--8330--30b761ce6399', 'dm-uuid-LVM-FIDBVmZvPJCVlKBWyBmTxCi7nOYfujidc1htUG46sWxF2dd6J4BIKBoMeJlsWT11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--185b0f4c--91cb--52bd--aac1--e01f69de71f3-osd--block--185b0f4c--91cb--52bd--aac1--e01f69de71f3', 'dm-uuid-LVM-b7fggpaB1M51uQSJQvACL6EVoRI0AC9FICBhcxsIn8K6v1Ar150fZTrHER4iS887'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_6c38e120-2a61-498a-a8ca-bc35055fc2f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--118242ed--6ea1--54c4--bfaa--1565dde441bc-osd--block--118242ed--6ea1--54c4--bfaa--1565dde441bc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UT9Bh1-4p8c-FCi0-Y3Pl-TrAL-cYTJ-PXESsV', 'scsi-0QEMU_QEMU_HARDDISK_923f2b44-0879-4277-a106-844be4b2565d', 'scsi-SQEMU_QEMU_HARDDISK_923f2b44-0879-4277-a106-844be4b2565d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-19 09:07:26.604600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part1', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part14', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part15', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part16', 'scsi-SQEMU_QEMU_HARDDISK_c2f313e9-cec4-4f16-a2dd-db2bae446cdb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604623 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f77e8fc9--ceed--59c4--8328--4d335fb6ee54-osd--block--f77e8fc9--ceed--59c4--8328--4d335fb6ee54'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qfbbg0-AXbj-2CBj-qHc6-VGx9-C6V6-WvY0EJ', 'scsi-0QEMU_QEMU_HARDDISK_0c5208c8-9aa1-4e87-9cdb-910770e18a0c', 'scsi-SQEMU_QEMU_HARDDISK_0c5208c8-9aa1-4e87-9cdb-910770e18a0c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--45b4b457--0c8f--5565--8330--30b761ce6399-osd--block--45b4b457--0c8f--5565--8330--30b761ce6399'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9djOlR-QZOy-Dl1F-RkRk-saSA-evps-3BKngU', 'scsi-0QEMU_QEMU_HARDDISK_eb5d754e-727a-4983-9d71-2a65afff7a52', 'scsi-SQEMU_QEMU_HARDDISK_eb5d754e-727a-4983-9d71-2a65afff7a52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--185b0f4c--91cb--52bd--aac1--e01f69de71f3-osd--block--185b0f4c--91cb--52bd--aac1--e01f69de71f3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1vS8HN-ZjI9-2C0i-kDNF-EEJM-07Vg-ezf8O5', 'scsi-0QEMU_QEMU_HARDDISK_00a01370-945d-463a-a32d-5e52b5234eb4', 'scsi-SQEMU_QEMU_HARDDISK_00a01370-945d-463a-a32d-5e52b5234eb4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69806146-708c-4195-b6c7-ec061db9d03d', 'scsi-SQEMU_QEMU_HARDDISK_69806146-708c-4195-b6c7-ec061db9d03d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_933f95c9-b090-4d95-b9b7-90a087e62286', 'scsi-SQEMU_QEMU_HARDDISK_933f95c9-b090-4d95-b9b7-90a087e62286'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-19-08-06-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-19 09:07:26.604743 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.604757 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.604771 | orchestrator | 2025-02-19 09:07:26.604785 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-19 09:07:26.604799 | orchestrator | Wednesday 19 February 2025 09:05:19 +0000 (0:00:00.847) 0:00:22.553 **** 2025-02-19 09:07:26.604813 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-02-19 09:07:26.604826 | orchestrator | 2025-02-19 09:07:26.604840 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-19 09:07:26.604854 | orchestrator | Wednesday 19 February 2025 09:05:21 +0000 (0:00:01.703) 0:00:24.256 **** 2025-02-19 09:07:26.604868 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.604881 | orchestrator | 2025-02-19 09:07:26.604895 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-02-19 09:07:26.604909 | orchestrator | Wednesday 19 February 2025 09:05:21 +0000 (0:00:00.214) 0:00:24.470 **** 2025-02-19 09:07:26.604923 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.604936 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.604951 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.604964 | orchestrator | 2025-02-19 09:07:26.604978 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-19 09:07:26.604992 | orchestrator | Wednesday 19 February 2025 09:05:21 +0000 (0:00:00.527) 0:00:24.998 **** 2025-02-19 09:07:26.605005 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.605019 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.605033 | orchestrator | ok: [testbed-node-5] 2025-02-19 
09:07:26.605047 | orchestrator | 2025-02-19 09:07:26.605061 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-19 09:07:26.605075 | orchestrator | Wednesday 19 February 2025 09:05:22 +0000 (0:00:00.993) 0:00:25.992 **** 2025-02-19 09:07:26.605096 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.605110 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.605124 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.605138 | orchestrator | 2025-02-19 09:07:26.605152 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-19 09:07:26.605166 | orchestrator | Wednesday 19 February 2025 09:05:23 +0000 (0:00:00.516) 0:00:26.509 **** 2025-02-19 09:07:26.605179 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.605213 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.605227 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.605241 | orchestrator | 2025-02-19 09:07:26.605256 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-19 09:07:26.605269 | orchestrator | Wednesday 19 February 2025 09:05:24 +0000 (0:00:01.178) 0:00:27.687 **** 2025-02-19 09:07:26.605283 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.605298 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.605311 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.605325 | orchestrator | 2025-02-19 09:07:26.605339 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-19 09:07:26.605353 | orchestrator | Wednesday 19 February 2025 09:05:25 +0000 (0:00:00.437) 0:00:28.124 **** 2025-02-19 09:07:26.605367 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.605381 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.605395 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.605409 | orchestrator | 2025-02-19 09:07:26.605423 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-19 09:07:26.605437 | orchestrator | Wednesday 19 February 2025 09:05:25 +0000 (0:00:00.926) 0:00:29.051 **** 2025-02-19 09:07:26.605451 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.605465 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.605478 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.605492 | orchestrator | 2025-02-19 09:07:26.605506 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-19 09:07:26.605520 | orchestrator | Wednesday 19 February 2025 09:05:26 +0000 (0:00:00.473) 0:00:29.525 **** 2025-02-19 09:07:26.605533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:07:26.605548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:07:26.605562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 09:07:26.605580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:07:26.605595 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.605609 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 09:07:26.605623 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-19 09:07:26.605636 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 09:07:26.605650 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-2)  2025-02-19 09:07:26.605664 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.605678 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 09:07:26.605692 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.605706 | orchestrator | 2025-02-19 09:07:26.605720 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-19 09:07:26.605739 | orchestrator | Wednesday 19 February 2025 09:05:27 +0000 (0:00:01.296) 0:00:30.821 **** 2025-02-19 09:07:26.605754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:07:26.605768 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 09:07:26.605782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:07:26.605796 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 09:07:26.605809 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 09:07:26.605823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:07:26.605844 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.605858 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-19 09:07:26.605872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 09:07:26.605886 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.605900 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-19 09:07:26.605913 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.605927 | orchestrator | 2025-02-19 09:07:26.605941 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-19 09:07:26.605960 | orchestrator | Wednesday 19 February 2025 09:05:29 +0000 (0:00:01.348) 0:00:32.170 **** 2025-02-19 09:07:26.605975 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-02-19 09:07:26.605988 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-02-19 09:07:26.606002 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-02-19 09:07:26.606049 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-02-19 09:07:26.606066 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-02-19 09:07:26.606080 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-02-19 09:07:26.606094 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-02-19 09:07:26.606108 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-02-19 09:07:26.606122 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-02-19 09:07:26.606136 | orchestrator | 2025-02-19 09:07:26.606150 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-19 09:07:26.606164 | orchestrator | Wednesday 19 February 2025 09:05:31 +0000 (0:00:02.518) 0:00:34.688 **** 2025-02-19 09:07:26.606178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:07:26.606219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:07:26.606234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:07:26.606247 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.606272 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 09:07:26.606286 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-1)  2025-02-19 09:07:26.606300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-19 09:07:26.606314 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.606327 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 09:07:26.606341 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 09:07:26.606355 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 09:07:26.606369 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.606383 | orchestrator | 2025-02-19 09:07:26.606397 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-02-19 09:07:26.606411 | orchestrator | Wednesday 19 February 2025 09:05:32 +0000 (0:00:00.730) 0:00:35.418 **** 2025-02-19 09:07:26.606425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-19 09:07:26.606439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-19 09:07:26.606453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-19 09:07:26.606467 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-19 09:07:26.606481 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.606495 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-19 09:07:26.606508 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-19 09:07:26.606522 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.606536 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-19 09:07:26.606550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-19 09:07:26.606564 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-19 09:07:26.606578 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.606599 | orchestrator | 2025-02-19 09:07:26.606613 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-19 09:07:26.606627 | orchestrator | Wednesday 19 February 2025 09:05:32 +0000 (0:00:00.438) 0:00:35.857 **** 2025-02-19 09:07:26.606641 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-19 09:07:26.606655 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-19 09:07:26.606670 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-19 09:07:26.606684 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-19 09:07:26.606698 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.606712 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-19 09:07:26.606733 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-19 09:07:26.606747 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.606762 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-19 09:07:26.606783 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-19 09:07:26.606797 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 
'addr': '192.168.16.12'})  2025-02-19 09:07:26.606811 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.606825 | orchestrator | 2025-02-19 09:07:26.606838 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-19 09:07:26.606852 | orchestrator | Wednesday 19 February 2025 09:05:33 +0000 (0:00:00.592) 0:00:36.450 **** 2025-02-19 09:07:26.606866 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:07:26.606880 | orchestrator | 2025-02-19 09:07:26.606894 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-19 09:07:26.606908 | orchestrator | Wednesday 19 February 2025 09:05:34 +0000 (0:00:01.001) 0:00:37.451 **** 2025-02-19 09:07:26.606922 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.606936 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.606950 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.606963 | orchestrator | 2025-02-19 09:07:26.606977 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-19 09:07:26.606991 | orchestrator | Wednesday 19 February 2025 09:05:34 +0000 (0:00:00.396) 0:00:37.848 **** 2025-02-19 09:07:26.607004 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.607018 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.607032 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.607046 | orchestrator | 2025-02-19 09:07:26.607060 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-19 09:07:26.607073 | orchestrator | Wednesday 19 February 2025 09:05:35 +0000 (0:00:00.346) 0:00:38.194 **** 2025-02-19 09:07:26.607087 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.607101 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.607114 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.607128 | orchestrator | 2025-02-19 09:07:26.607142 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-19 09:07:26.607156 | orchestrator | Wednesday 19 February 2025 09:05:35 +0000 (0:00:00.430) 0:00:38.625 **** 2025-02-19 09:07:26.607169 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.607202 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.607216 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.607230 | orchestrator | 2025-02-19 09:07:26.607244 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-19 09:07:26.607258 | orchestrator | Wednesday 19 February 2025 09:05:36 +0000 (0:00:00.776) 0:00:39.402 **** 2025-02-19 09:07:26.607279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:07:26.607293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:07:26.607307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:07:26.607321 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.607336 | orchestrator | 2025-02-19 09:07:26.607350 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-19 09:07:26.607363 | orchestrator | Wednesday 19 February 2025 09:05:36 +0000 (0:00:00.409) 0:00:39.812 **** 2025-02-19 09:07:26.607377 | orchestrator 
| skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:07:26.607396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:07:26.607410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:07:26.607425 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.607439 | orchestrator | 2025-02-19 09:07:26.607453 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-19 09:07:26.607466 | orchestrator | Wednesday 19 February 2025 09:05:37 +0000 (0:00:00.519) 0:00:40.332 **** 2025-02-19 09:07:26.607480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:07:26.607494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:07:26.607508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:07:26.607522 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.607536 | orchestrator | 2025-02-19 09:07:26.607550 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:07:26.607564 | orchestrator | Wednesday 19 February 2025 09:05:37 +0000 (0:00:00.499) 0:00:40.831 **** 2025-02-19 09:07:26.607578 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:07:26.607591 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:07:26.607605 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:07:26.607619 | orchestrator | 2025-02-19 09:07:26.607633 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-19 09:07:26.607647 | orchestrator | Wednesday 19 February 2025 09:05:38 +0000 (0:00:00.576) 0:00:41.408 **** 2025-02-19 09:07:26.607661 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-19 09:07:26.607675 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-19 09:07:26.607689 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-19 09:07:26.607703 | orchestrator | 2025-02-19 09:07:26.607716 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-19 09:07:26.607731 | orchestrator | Wednesday 19 February 2025 09:05:39 +0000 (0:00:01.433) 0:00:42.842 **** 2025-02-19 09:07:26.607744 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.607758 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.607772 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.607786 | orchestrator | 2025-02-19 09:07:26.607800 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-19 09:07:26.607814 | orchestrator | Wednesday 19 February 2025 09:05:40 +0000 (0:00:00.333) 0:00:43.176 **** 2025-02-19 09:07:26.607828 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.607842 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.607855 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.607869 | orchestrator | 2025-02-19 09:07:26.607883 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-19 09:07:26.607902 | orchestrator | Wednesday 19 February 2025 09:05:40 +0000 (0:00:00.409) 0:00:43.585 **** 2025-02-19 09:07:26.607917 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-19 09:07:26.607936 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.607950 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-19 09:07:26.607964 | orchestrator | skipping: 
[testbed-node-4] 2025-02-19 09:07:26.607978 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-19 09:07:26.607992 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.608015 | orchestrator | 2025-02-19 09:07:26.608029 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-19 09:07:26.608043 | orchestrator | Wednesday 19 February 2025 09:05:41 +0000 (0:00:00.609) 0:00:44.195 **** 2025-02-19 09:07:26.608057 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-19 09:07:26.608071 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.608084 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-19 09:07:26.608099 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.608113 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-19 09:07:26.608127 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.608140 | orchestrator | 2025-02-19 09:07:26.608154 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-19 09:07:26.608168 | orchestrator | Wednesday 19 February 2025 09:05:41 +0000 (0:00:00.569) 0:00:44.764 **** 2025-02-19 09:07:26.608199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-19 09:07:26.608214 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-19 09:07:26.608227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-19 09:07:26.608241 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-19 09:07:26.608255 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-19 09:07:26.608268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-19 09:07:26.608282 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.608296 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-19 09:07:26.608310 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.608324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-19 09:07:26.608338 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-19 09:07:26.608351 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.608366 | orchestrator | 2025-02-19 09:07:26.608379 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-19 09:07:26.608394 | orchestrator | Wednesday 19 February 2025 09:05:42 +0000 (0:00:00.823) 0:00:45.587 **** 2025-02-19 09:07:26.608407 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.608422 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.608436 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:07:26.608449 | orchestrator | 2025-02-19 09:07:26.608463 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-19 09:07:26.608477 | orchestrator | Wednesday 19 February 2025 09:05:42 +0000 (0:00:00.352) 0:00:45.940 **** 2025-02-19 09:07:26.608491 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-19 09:07:26.608505 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:07:26.608519 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:07:26.608533 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-19 09:07:26.608547 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-19 09:07:26.608561 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-19 09:07:26.608575 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-19 09:07:26.608588 | orchestrator | 2025-02-19 09:07:26.608602 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-19 09:07:26.608616 | orchestrator | Wednesday 19 February 2025 09:05:44 +0000 (0:00:01.437) 0:00:47.378 **** 2025-02-19 09:07:26.608630 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-19 09:07:26.608651 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-19 09:07:26.608665 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-19 09:07:26.608679 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-19 09:07:26.608693 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-19 09:07:26.608707 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-19 09:07:26.608721 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-19 09:07:26.608735 | orchestrator | 2025-02-19 09:07:26.608749 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-02-19 09:07:26.608763 | orchestrator | Wednesday 19 February 2025 09:05:46 +0000 (0:00:02.605) 0:00:49.983 **** 2025-02-19 09:07:26.608777 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:07:26.608791 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:07:26.608805 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-02-19 09:07:26.608819 | orchestrator | 2025-02-19 09:07:26.608833 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-02-19 09:07:26.608852 | orchestrator | Wednesday 19 February 2025 09:05:47 +0000 (0:00:00.741) 0:00:50.724 **** 2025-02-19 09:07:26.608866 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-19 09:07:26.608883 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-19 09:07:26.608898 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 
'size': 3, 'type': 1}) 2025-02-19 09:07:26.608912 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-19 09:07:26.608927 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-19 09:07:26.608941 | orchestrator | 2025-02-19 09:07:26.608955 | orchestrator | TASK [generate keys] *********************************************************** 2025-02-19 09:07:26.608969 | orchestrator | Wednesday 19 February 2025 09:06:30 +0000 (0:00:42.941) 0:01:33.665 **** 2025-02-19 09:07:26.608983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.608997 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609011 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609025 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609044 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609072 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-02-19 09:07:26.609094 | orchestrator | 2025-02-19 09:07:26.609108 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-02-19 09:07:26.609122 | orchestrator | Wednesday 19 February 2025 09:06:54 +0000 (0:00:23.512) 0:01:57.178 **** 2025-02-19 09:07:26.609136 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609150 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609164 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609178 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609211 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609225 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609239 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-19 09:07:26.609253 | orchestrator | 2025-02-19 09:07:26.609266 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-02-19 09:07:26.609280 | orchestrator | Wednesday 19 February 2025 09:07:05 +0000 (0:00:11.600) 0:02:08.778 **** 2025-02-19 09:07:26.609294 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609308 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-19 09:07:26.609321 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-19 09:07:26.609335 | orchestrator | changed: 
[testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609349 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-19 09:07:26.609363 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-19 09:07:26.609376 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609390 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-19 09:07:26.609404 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-19 09:07:26.609417 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:26.609431 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-19 09:07:26.609450 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-19 09:07:29.647442 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:29.647569 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-19 09:07:29.647588 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-19 09:07:29.647602 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-19 09:07:29.647616 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-19 09:07:29.647630 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-19 09:07:29.647645 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-02-19 09:07:29.647659 | orchestrator | 2025-02-19 09:07:29.647673 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:07:29.647690 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-02-19 09:07:29.647706 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-02-19 09:07:29.647720 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-02-19 09:07:29.647777 | orchestrator | 2025-02-19 09:07:29.647804 | orchestrator | 2025-02-19 09:07:29.647818 | orchestrator | 2025-02-19 09:07:29.647832 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:07:29.647846 | orchestrator | Wednesday 19 February 2025 09:07:24 +0000 (0:00:18.802) 0:02:27.581 **** 2025-02-19 09:07:29.647860 | orchestrator | =============================================================================== 2025-02-19 09:07:29.647874 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.94s 2025-02-19 09:07:29.647888 | orchestrator | generate keys ---------------------------------------------------------- 23.51s 2025-02-19 09:07:29.647902 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.80s 2025-02-19 09:07:29.647916 | orchestrator | get keys from monitors ------------------------------------------------- 11.60s 2025-02-19 09:07:29.647930 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.61s 2025-02-19 09:07:29.647943 | orchestrator | ceph-facts : find a 
running mon container ------------------------------- 2.60s 2025-02-19 09:07:29.647971 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.52s 2025-02-19 09:07:29.647997 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.70s 2025-02-19 09:07:29.648014 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.55s 2025-02-19 09:07:29.648030 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.44s 2025-02-19 09:07:29.648046 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 1.43s 2025-02-19 09:07:29.648062 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 1.35s 2025-02-19 09:07:29.648077 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.30s 2025-02-19 09:07:29.648095 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.18s 2025-02-19 09:07:29.648111 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 1.00s 2025-02-19 09:07:29.648126 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.99s 2025-02-19 09:07:29.648142 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.96s 2025-02-19 09:07:29.648158 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.93s 2025-02-19 09:07:29.648174 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.93s 2025-02-19 09:07:29.648244 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.87s 2025-02-19 09:07:29.648262 | orchestrator | 2025-02-19 09:07:26 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:29.648279 | orchestrator | 2025-02-19 09:07:26 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:29.648315 | orchestrator | 2025-02-19 09:07:29 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:29.649877 | orchestrator | 2025-02-19 09:07:29 | INFO  | Task cce779f2-7eea-4fba-87f7-27c8fd4ad1af is in state STARTED 2025-02-19 09:07:29.651018 | orchestrator | 2025-02-19 09:07:29 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:32.688358 | orchestrator | 2025-02-19 09:07:29 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:32.688501 | orchestrator | 2025-02-19 09:07:32 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:32.689754 | orchestrator | 2025-02-19 09:07:32 | INFO  | Task cce779f2-7eea-4fba-87f7-27c8fd4ad1af is in state STARTED 2025-02-19 09:07:32.692631 | orchestrator | 2025-02-19 09:07:32 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:35.736125 | orchestrator | 2025-02-19 09:07:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:35.736328 | orchestrator | 2025-02-19 09:07:35 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:35.737726 | orchestrator | 2025-02-19 09:07:35 | INFO  | Task cce779f2-7eea-4fba-87f7-27c8fd4ad1af is in state STARTED 2025-02-19 09:07:35.739168 | orchestrator | 2025-02-19 09:07:35 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:38.789978 | orchestrator | 2025-02-19 
09:07:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:38.790241 | orchestrator | 2025-02-19 09:07:38 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:38.791488 | orchestrator | 2025-02-19 09:07:38 | INFO  | Task cce779f2-7eea-4fba-87f7-27c8fd4ad1af is in state STARTED 2025-02-19 09:07:38.793239 | orchestrator | 2025-02-19 09:07:38 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:41.835478 | orchestrator | 2025-02-19 09:07:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:41.835633 | orchestrator | 2025-02-19 09:07:41 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:41.837154 | orchestrator | 2025-02-19 09:07:41.837236 | orchestrator | 2025-02-19 09:07:41.837254 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-02-19 09:07:41.837269 | orchestrator | 2025-02-19 09:07:41.837284 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-02-19 09:07:41.837299 | orchestrator | Wednesday 19 February 2025 09:07:29 +0000 (0:00:00.203) 0:00:00.203 **** 2025-02-19 09:07:41.837313 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-02-19 09:07:41.837328 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-02-19 09:07:41.837342 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-02-19 09:07:41.837356 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-02-19 09:07:41.837370 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-02-19 09:07:41.837383 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-02-19 09:07:41.837397 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-02-19 09:07:41.837411 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-02-19 09:07:41.837425 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-02-19 09:07:41.837438 | orchestrator | 2025-02-19 09:07:41.837452 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-02-19 09:07:41.837484 | orchestrator | Wednesday 19 February 2025 09:07:33 +0000 (0:00:04.601) 0:00:04.805 **** 2025-02-19 09:07:41.837502 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.admin.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "eeecedfff654fc76688cad1442f3f6d3fce140d6", "item": {"ansible_loop_var": "item", "changed": false, "content": "W2NsaWVudC5hZG1pbl0KCWtleSA9IEFRQ3BuTFZuNGthaElSQUFNdGlMd0l3bHd1dEFZNXJ3ZnNScjRRPT0KCWNhcHMgbWRzID0gImFsbG93ICoiCgljYXBzIG1nciA9ICJhbGxvdyAqIgoJY2FwcyBtb24gPSAiYWxsb3cgKiIKCWNhcHMgb3NkID0gImFsbG93ICoiCg==", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": "/etc/ceph/ceph.client.admin.keyring"}}, "item": {"dest": "/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring", "src": "ceph.client.admin.keyring"}, "source": "/etc/ceph/ceph.client.admin.keyring"}, "msg": "Destination 
directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.837521 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.cinder.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "b0ca0f07168e8209976325efef7f259e63102d2d", "item": {"ansible_loop_var": "item", "changed": false, "content": "W2NsaWVudC5jaW5kZXJdCglrZXkgPSBBUUFjbjdWbkFBQUFBQkFBMHVyZ0ZTZStBNHVCNUFLVkIrLzRTUT09CgljYXBzIG1vbiA9ICJwcm9maWxlIHJiZCIKCWNhcHMgb3NkID0gInByb2ZpbGUgcmJkIHBvb2w9dm9sdW1lcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzIgo=", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": "/etc/ceph/ceph.client.cinder.keyring"}}, "item": {"dest": "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring", "src": "ceph.client.cinder.keyring"}, "source": "/etc/ceph/ceph.client.cinder.keyring"}, "msg": "Destination directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.837560 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.cinder.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "b0ca0f07168e8209976325efef7f259e63102d2d", "item": {"ansible_loop_var": "item", "changed": false, "content": "W2NsaWVudC5jaW5kZXJdCglrZXkgPSBBUUFjbjdWbkFBQUFBQkFBMHVyZ0ZTZStBNHVCNUFLVkIrLzRTUT09CgljYXBzIG1vbiA9ICJwcm9maWxlIHJiZCIKCWNhcHMgb3NkID0gInByb2ZpbGUgcmJkIHBvb2w9dm9sdW1lcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzIgo=", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": "/etc/ceph/ceph.client.cinder.keyring"}}, "item": {"dest": "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring", "src": "ceph.client.cinder.keyring"}, "source": "/etc/ceph/ceph.client.cinder.keyring"}, "msg": "Destination directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.837588 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.cinder-backup.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "f0913cd742c5d4c5fabc28978acdee205dc1cf5f", "item": {"ansible_loop_var": "item", "changed": false, "content": "W2NsaWVudC5jaW5kZXItYmFja3VwXQoJa2V5ID0gQVFBWW43Vm5BQUFBQUJBQWloSE5PUlc0RUNKc1RTa0tGWm1kN2c9PQoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPWJhY2t1cHMiCg==", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": "/etc/ceph/ceph.client.cinder-backup.keyring"}}, "item": {"dest": "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring", "src": "ceph.client.cinder-backup.keyring"}, "source": "/etc/ceph/ceph.client.cinder-backup.keyring"}, "msg": "Destination directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.837604 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.cinder.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "b0ca0f07168e8209976325efef7f259e63102d2d", "item": {"ansible_loop_var": "item", "changed": false, "content": "W2NsaWVudC5jaW5kZXJdCglrZXkgPSBBUUFjbjdWbkFBQUFBQkFBMHVyZ0ZTZStBNHVCNUFLVkIrLzRTUT09CgljYXBzIG1vbiA9ICJwcm9maWxlIHJiZCIKCWNhcHMgb3NkID0gInByb2ZpbGUgcmJkIHBvb2w9dm9sdW1lcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9aW1hZ2VzIgo=", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": 
"/etc/ceph/ceph.client.cinder.keyring"}}, "item": {"dest": "/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring", "src": "ceph.client.cinder.keyring"}, "source": "/etc/ceph/ceph.client.cinder.keyring"}, "msg": "Destination directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.837619 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.nova.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "96a971e670abf4b4dc999516bf18ec3bc2d34486", "item": {"ansible_loop_var": "item", "changed": false, "content": "W2NsaWVudC5ub3ZhXQoJa2V5ID0gQVFBbm43Vm5BQUFBQUJBQTNFTDJ3MWkzUEpMRzNWS1dTRkREMkE9PQoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPWltYWdlcywgcHJvZmlsZSByYmQgcG9vbD12bXMsIHByb2ZpbGUgcmJkIHBvb2w9dm9sdW1lcywgcHJvZmlsZSByYmQgcG9vbD1iYWNrdXBzIgo=", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": "/etc/ceph/ceph.client.nova.keyring"}}, "item": {"dest": "/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring", "src": "ceph.client.nova.keyring"}, "source": "/etc/ceph/ceph.client.nova.keyring"}, "msg": "Destination directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.837641 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.glance.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "e93d9befddda7bd8437ba87b61b9e0a12b83a93e", "item": {"ansible_loop_var": "item", "changed": false, "content": "W2NsaWVudC5nbGFuY2VdCglrZXkgPSBBUUFnbjdWbkFBQUFBQkFBQ0JkbXhFbDFEeGVFWkRKWTh6WE9Udz09CgljYXBzIG1vbiA9ICJwcm9maWxlIHJiZCIKCWNhcHMgb3NkID0gInByb2ZpbGUgcmJkIHBvb2w9dm9sdW1lcywgcHJvZmlsZSByYmQgcG9vbD1pbWFnZXMiCg==", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": "/etc/ceph/ceph.client.glance.keyring"}}, "item": {"dest": "/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring", "src": "ceph.client.glance.keyring"}, "source": "/etc/ceph/ceph.client.glance.keyring"}, "msg": "Destination directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.837665 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.gnocchi.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "dfda0343be0bf5c6a793d789a6b48c62c39fa01f", "item": {"ansible_loop_var": "item", "changed": false, "content": "W2NsaWVudC5nbm9jY2hpXQoJa2V5ID0gQVFBam43Vm5BQUFBQUJBQWhZZkh1R01FNTVvd2ZjRjNrdTFoR3c9PQoJY2FwcyBtb24gPSAicHJvZmlsZSByYmQiCgljYXBzIG9zZCA9ICJwcm9maWxlIHJiZCBwb29sPW1ldHJpY3MiCg==", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": "/etc/ceph/ceph.client.gnocchi.keyring"}}, "item": {"dest": "/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring", "src": "ceph.client.gnocchi.keyring"}, "source": "/etc/ceph/ceph.client.gnocchi.keyring"}, "msg": "Destination directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.837976 | orchestrator | failed: [testbed-manager -> localhost] (item=ceph.client.manila.keyring) => {"ansible_loop_var": "item", "changed": false, "checksum": "38f924b6fcb10d1507e36cce435b8f2d2cfa2618", "item": {"ansible_loop_var": "item", "changed": false, "content": 
"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUFybjdWbkFBQUFBQkFBM2F6bkgvQUpFd2F4QzhnUEFrYTY2Zz09CgljYXBzIG1nciA9ICJhbGxvdyBydyIKCWNhcHMgbW9uID0gImFsbG93IHIiCgljYXBzIG9zZCA9ICJhbGxvdyBydyBwb29sPWNlcGhmc19kYXRhIgo=", "encoding": "base64", "failed": false, "invocation": {"module_args": {"src": "/etc/ceph/ceph.client.manila.keyring"}}, "item": {"dest": "/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring", "src": "ceph.client.manila.keyring"}, "source": "/etc/ceph/ceph.client.manila.keyring"}, "msg": "Destination directory /share/11111111-1111-1111-1111-111111111111/etc/ceph does not exist"} 2025-02-19 09:07:41.838063 | orchestrator | 2025-02-19 09:07:41.838083 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:07:41.838098 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-19 09:07:41.838113 | orchestrator | 2025-02-19 09:07:41.838127 | orchestrator | 2025-02-19 09:07:41.838141 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:07:41.838155 | orchestrator | Wednesday 19 February 2025 09:07:41 +0000 (0:00:07.476) 0:00:12.281 **** 2025-02-19 09:07:41.838169 | orchestrator | =============================================================================== 2025-02-19 09:07:41.838239 | orchestrator | Write ceph keys to the share directory ---------------------------------- 7.48s 2025-02-19 09:07:41.838256 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.60s 2025-02-19 09:07:41.838284 | orchestrator | 2025-02-19 09:07:41 | INFO  | Task cce779f2-7eea-4fba-87f7-27c8fd4ad1af is in state SUCCESS 2025-02-19 09:07:41.838312 | orchestrator | 2025-02-19 09:07:41 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:44.883936 | orchestrator | 2025-02-19 09:07:41 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:44.884122 | orchestrator | 2025-02-19 09:07:44 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:44.884763 | orchestrator | 2025-02-19 09:07:44 | INFO  | Task a8b9f944-da9c-4ff0-aa73-f0de251a51bf is in state STARTED 2025-02-19 09:07:44.885917 | orchestrator | 2025-02-19 09:07:44 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:44.886221 | orchestrator | 2025-02-19 09:07:44 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:47.936586 | orchestrator | 2025-02-19 09:07:47 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:47.937278 | orchestrator | 2025-02-19 09:07:47 | INFO  | Task a8b9f944-da9c-4ff0-aa73-f0de251a51bf is in state STARTED 2025-02-19 09:07:47.938268 | orchestrator | 2025-02-19 09:07:47 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:50.992307 | orchestrator | 2025-02-19 09:07:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:50.992476 | orchestrator | 2025-02-19 09:07:50 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:50.993789 | orchestrator | 2025-02-19 09:07:50 | INFO  | Task bf65a5cd-6e4e-4c7a-8434-40347c5d23d9 is in state STARTED 2025-02-19 09:07:50.993862 | orchestrator | 2025-02-19 09:07:50 | INFO  | Task a8b9f944-da9c-4ff0-aa73-f0de251a51bf is in state SUCCESS 2025-02-19 09:07:50.999648 | orchestrator | 2025-02-19 09:07:50 | INFO  | Task 
9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:51.003559 | orchestrator | 2025-02-19 09:07:50 | INFO  | Task 556ccdba-434f-4b06-bfc4-19aad2d79a96 is in state STARTED 2025-02-19 09:07:51.003656 | orchestrator | 2025-02-19 09:07:51 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:07:54.068811 | orchestrator | 2025-02-19 09:07:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:54.068961 | orchestrator | 2025-02-19 09:07:54 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:54.069367 | orchestrator | 2025-02-19 09:07:54 | INFO  | Task bf65a5cd-6e4e-4c7a-8434-40347c5d23d9 is in state STARTED 2025-02-19 09:07:54.069412 | orchestrator | 2025-02-19 09:07:54 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:54.070238 | orchestrator | 2025-02-19 09:07:54 | INFO  | Task 556ccdba-434f-4b06-bfc4-19aad2d79a96 is in state STARTED 2025-02-19 09:07:54.071569 | orchestrator | 2025-02-19 09:07:54 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:07:57.111762 | orchestrator | 2025-02-19 09:07:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:07:57.111927 | orchestrator | 2025-02-19 09:07:57 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:07:57.112704 | orchestrator | 2025-02-19 09:07:57 | INFO  | Task bf65a5cd-6e4e-4c7a-8434-40347c5d23d9 is in state SUCCESS 2025-02-19 09:07:57.112819 | orchestrator | 2025-02-19 09:07:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:07:57.116647 | orchestrator | 2025-02-19 09:07:57 | INFO  | Task 556ccdba-434f-4b06-bfc4-19aad2d79a96 is in state SUCCESS 2025-02-19 09:07:57.140322 | orchestrator | 2025-02-19 09:07:57 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:00.177909 | orchestrator | 2025-02-19 09:07:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:00.178008 | orchestrator | 2025-02-19 09:08:00 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:08:00.178270 | orchestrator | 2025-02-19 09:08:00 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:00.179082 | orchestrator | 2025-02-19 09:08:00 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:00.184683 | orchestrator | 2025-02-19 09:08:00 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:03.231784 | orchestrator | 2025-02-19 09:08:00 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:03.231896 | orchestrator | 2025-02-19 09:08:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:03.231921 | orchestrator | 2025-02-19 09:08:03 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state STARTED 2025-02-19 09:08:03.233798 | orchestrator | 2025-02-19 09:08:03 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:03.233815 | orchestrator | 2025-02-19 09:08:03 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:03.233824 | orchestrator | 2025-02-19 09:08:03 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:03.233836 | orchestrator | 2025-02-19 09:08:03 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:06.277294 | orchestrator | 2025-02-19 
09:08:03 | INFO  | Wait 1 second(s) until the next check
2025-02-19 09:08:06.277645 | orchestrator | 2025-02-19 09:08:06 | INFO  | Task d6469075-a28f-4f95-8710-1e71fdf6a4c9 is in state SUCCESS
2025-02-19 09:08:06.278665 | orchestrator |
2025-02-19 09:08:06.279674 | orchestrator |
2025-02-19 09:08:06.279700 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-02-19 09:08:06.279715 | orchestrator |
2025-02-19 09:08:06.279730 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-02-19 09:08:06.279745 | orchestrator | Wednesday 19 February 2025 09:07:45 +0000 (0:00:00.218) 0:00:00.218 ****
2025-02-19 09:08:06.279760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-02-19 09:08:06.279776 | orchestrator |
2025-02-19 09:08:06.279790 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-02-19 09:08:06.279804 | orchestrator | Wednesday 19 February 2025 09:07:45 +0000 (0:00:00.257) 0:00:00.476 ****
2025-02-19 09:08:06.279820 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-02-19 09:08:06.279834 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-02-19 09:08:06.279849 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-02-19 09:08:06.279863 | orchestrator |
2025-02-19 09:08:06.279877 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-02-19 09:08:06.279892 | orchestrator | Wednesday 19 February 2025 09:07:47 +0000 (0:00:01.459) 0:00:01.935 ****
2025-02-19 09:08:06.279906 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-02-19 09:08:06.279949 | orchestrator |
2025-02-19 09:08:06.279964 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-02-19 09:08:06.279979 | orchestrator | Wednesday 19 February 2025 09:07:48 +0000 (0:00:01.281) 0:00:03.217 ****
2025-02-19 09:08:06.279996 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleError: An unhandled exception occurred while templating '{{ lookup('file', '{{ configuration_directory }}/environments/infrastructure/files/ceph/ceph.client.admin.keyring', rstrip=false) | default('', true) }}'. Error was a , original message: The 'file' lookup had an issue accessing the file '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'. file not found, use -vvvvv to see paths searched
2025-02-19 09:08:06.280105 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": false, "msg": "AnsibleError: An unhandled exception occurred while templating '{{ lookup('file', '{{ configuration_directory }}/environments/infrastructure/files/ceph/ceph.client.admin.keyring', rstrip=false) | default('', true) }}'. Error was a , original message: The 'file' lookup had an issue accessing the file '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'.
file not found, use -vvvvv to see paths searched"}
2025-02-19 09:08:06.280125 | orchestrator |
2025-02-19 09:08:06.280140 | orchestrator | PLAY RECAP *********************************************************************
2025-02-19 09:08:06.280154 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-19 09:08:06.280170 | orchestrator |
2025-02-19 09:08:06.280184 | orchestrator |
2025-02-19 09:08:06.280234 | orchestrator | TASKS RECAP ********************************************************************
2025-02-19 09:08:06.280253 | orchestrator | Wednesday 19 February 2025 09:07:48 +0000 (0:00:00.176) 0:00:03.394 ****
2025-02-19 09:08:06.280269 | orchestrator | ===============================================================================
2025-02-19 09:08:06.280285 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.46s
2025-02-19 09:08:06.280300 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.28s
2025-02-19 09:08:06.280316 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s
2025-02-19 09:08:06.280365 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.18s
2025-02-19 09:08:06.280383 | orchestrator |
2025-02-19 09:08:06.280449 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-02-19 09:08:06.280467 | orchestrator |
2025-02-19 09:08:06.280484 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-02-19 09:08:06.280499 | orchestrator |
2025-02-19 09:08:06.280515 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-02-19 09:08:06.280531 | orchestrator | Wednesday 19 February 2025 09:07:54 +0000 (0:00:01.254) 0:00:01.254 ****
2025-02-19 09:08:06.280633 | orchestrator | fatal: [testbed-manager]: FAILED!
=> {"changed": false, "cmd": "ceph mgr module disable dashboard", "msg": "[Errno 2] No such file or directory: b'ceph'", "rc": 2, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} 2025-02-19 09:08:06.280655 | orchestrator | 2025-02-19 09:08:06.280669 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:08:06.280685 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-19 09:08:06.280699 | orchestrator | 2025-02-19 09:08:06.280713 | orchestrator | 2025-02-19 09:08:06.280727 | orchestrator | 2025-02-19 09:08:06.280741 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:08:06.280796 | orchestrator | Wednesday 19 February 2025 09:07:55 +0000 (0:00:00.523) 0:00:01.778 **** 2025-02-19 09:08:06.280814 | orchestrator | =============================================================================== 2025-02-19 09:08:06.280828 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 0.52s 2025-02-19 09:08:06.280842 | orchestrator | 2025-02-19 09:08:06.280856 | orchestrator | 2025-02-19 09:08:06.280870 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:08:06.280896 | orchestrator | 2025-02-19 09:08:06.280911 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:08:06.280925 | orchestrator | Wednesday 19 February 2025 09:07:53 +0000 (0:00:00.466) 0:00:00.466 **** 2025-02-19 09:08:06.280939 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:08:06.280954 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:08:06.280969 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:08:06.281001 | orchestrator | 2025-02-19 09:08:06.281015 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:08:06.281034 | orchestrator | Wednesday 19 February 2025 09:07:54 +0000 (0:00:00.903) 0:00:01.369 **** 2025-02-19 09:08:06.281049 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-02-19 09:08:06.281063 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-02-19 09:08:06.281077 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-02-19 09:08:06.281091 | orchestrator | 2025-02-19 09:08:06.281106 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-02-19 09:08:06.281119 | orchestrator | 2025-02-19 09:08:06.281134 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-02-19 09:08:06.281148 | orchestrator | Wednesday 19 February 2025 09:07:55 +0000 (0:00:01.140) 0:00:02.509 **** 2025-02-19 09:08:06.281162 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:08:06.281177 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:08:06.281192 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:08:06.281288 | orchestrator | 2025-02-19 09:08:06.281306 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:08:06.281322 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:08:06.281338 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:08:06.281355 | orchestrator | testbed-node-2 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:08:06.281371 | orchestrator | 2025-02-19 09:08:06.281387 | orchestrator | 2025-02-19 09:08:06.281403 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:08:06.281419 | orchestrator | Wednesday 19 February 2025 09:07:56 +0000 (0:00:01.152) 0:00:03.661 **** 2025-02-19 09:08:06.281435 | orchestrator | =============================================================================== 2025-02-19 09:08:06.281451 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.15s 2025-02-19 09:08:06.281467 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s 2025-02-19 09:08:06.281484 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2025-02-19 09:08:06.281499 | orchestrator | 2025-02-19 09:08:06.281515 | orchestrator | 2025-02-19 09:08:06.281532 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:08:06.281548 | orchestrator | 2025-02-19 09:08:06.281564 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:08:06.281580 | orchestrator | Wednesday 19 February 2025 09:05:01 +0000 (0:00:00.388) 0:00:00.388 **** 2025-02-19 09:08:06.281595 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:08:06.281610 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:08:06.281624 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:08:06.281639 | orchestrator | 2025-02-19 09:08:06.281653 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:08:06.281666 | orchestrator | Wednesday 19 February 2025 09:05:02 +0000 (0:00:00.480) 0:00:00.868 **** 2025-02-19 09:08:06.281678 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-02-19 09:08:06.281691 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-02-19 09:08:06.281703 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-02-19 09:08:06.281716 | orchestrator | 2025-02-19 09:08:06.281728 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-02-19 09:08:06.281747 | orchestrator | 2025-02-19 09:08:06.281760 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-19 09:08:06.281774 | orchestrator | Wednesday 19 February 2025 09:05:02 +0000 (0:00:00.322) 0:00:01.191 **** 2025-02-19 09:08:06.281788 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:08:06.281801 | orchestrator | 2025-02-19 09:08:06.281813 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-02-19 09:08:06.281825 | orchestrator | Wednesday 19 February 2025 09:05:04 +0000 (0:00:01.835) 0:00:03.026 **** 2025-02-19 09:08:06.281880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.281900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.281916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.281930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.281952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.281997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282088 | orchestrator | 2025-02-19 09:08:06.282101 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-02-19 09:08:06.282113 | orchestrator | Wednesday 19 February 2025 09:05:07 +0000 (0:00:02.484) 0:00:05.511 **** 2025-02-19 09:08:06.282126 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-02-19 09:08:06.282138 | orchestrator | 2025-02-19 09:08:06.282151 | orchestrator | TASK [keystone : Set keystone policy file] 
************************************* 2025-02-19 09:08:06.282171 | orchestrator | Wednesday 19 February 2025 09:05:07 +0000 (0:00:00.594) 0:00:06.105 **** 2025-02-19 09:08:06.282183 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:08:06.282216 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:08:06.282230 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:08:06.282243 | orchestrator | 2025-02-19 09:08:06.282256 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-02-19 09:08:06.282268 | orchestrator | Wednesday 19 February 2025 09:05:08 +0000 (0:00:00.506) 0:00:06.611 **** 2025-02-19 09:08:06.282280 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:08:06.282293 | orchestrator | 2025-02-19 09:08:06.282305 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-19 09:08:06.282317 | orchestrator | Wednesday 19 February 2025 09:05:08 +0000 (0:00:00.550) 0:00:07.162 **** 2025-02-19 09:08:06.282330 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:08:06.282342 | orchestrator | 2025-02-19 09:08:06.282355 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-02-19 09:08:06.282367 | orchestrator | Wednesday 19 February 2025 09:05:09 +0000 (0:00:00.722) 0:00:07.885 **** 2025-02-19 09:08:06.282387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.282401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 
09:08:06.282415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.282436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.282531 | orchestrator | 2025-02-19 09:08:06.282544 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-02-19 09:08:06.282563 | orchestrator | Wednesday 19 February 2025 09:05:13 +0000 (0:00:03.987) 0:00:11.872 **** 2025-02-19 09:08:06.282577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:08:06.282591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.282611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:08:06.282625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:08:06.282705 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.282720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.282743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:08:06.282755 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.282769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:08:06.282782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.282803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:08:06.282816 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.282829 | orchestrator | 2025-02-19 09:08:06.282842 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-02-19 09:08:06.282854 | orchestrator | Wednesday 19 February 2025 09:05:14 +0000 (0:00:01.248) 0:00:13.121 **** 2025-02-19 09:08:06.282867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:08:06.282887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.282901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:08:06.282914 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.282927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:08:06.282946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.282972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:08:06.283088 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.283105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-19 09:08:06.283120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.283135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-19 09:08:06.283149 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.283162 | orchestrator | 2025-02-19 09:08:06.283176 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-02-19 09:08:06.283189 | orchestrator | Wednesday 19 February 2025 09:05:15 +0000 (0:00:01.259) 0:00:14.381 **** 2025-02-19 09:08:06.283234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
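The "Copying over config.json files for services" task above renders, for each container, a config.json that kolla_start reads at container start to copy the mounted config files into place and set ownership. A minimal sketch of what such a file typically contains for the keystone container follows; the command, paths, owners, and permissions are illustrative assumptions, not the exact rendered template.

```python
import json

# Illustrative sketch of a kolla config.json for the keystone container.
# Command, destinations, owners and permissions are assumptions for the sketch,
# not the literal output of the kolla-ansible template.
keystone_config_json = {
    "command": "/usr/sbin/apache2 -DFOREGROUND",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/keystone.conf",
            "dest": "/etc/keystone/keystone.conf",
            "owner": "keystone",
            "perm": "0600",
        },
    ],
    "permissions": [
        {"path": "/var/log/kolla/keystone", "owner": "keystone:keystone", "recurse": True},
    ],
}

print(json.dumps(keystone_config_json, indent=2))
```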
2025-02-19 09:08:06.283250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.283270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.283284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283396 | orchestrator | 2025-02-19 09:08:06.283409 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-02-19 09:08:06.283422 | orchestrator | Wednesday 19 February 2025 09:05:19 +0000 (0:00:03.852) 0:00:18.234 **** 2025-02-19 09:08:06.283435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.283448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.283468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.283490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.283503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 
09:08:06.283517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.283529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.283581 | orchestrator | 2025-02-19 09:08:06.283593 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-02-19 09:08:06.283606 | orchestrator | Wednesday 19 February 2025 09:05:30 +0000 (0:00:10.449) 0:00:28.683 **** 2025-02-19 09:08:06.283618 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:08:06.283631 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:08:06.283643 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:08:06.283656 | orchestrator | 2025-02-19 09:08:06.283668 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-02-19 09:08:06.283681 | orchestrator | Wednesday 19 February 2025 09:05:33 +0000 (0:00:02.948) 0:00:31.632 **** 2025-02-19 09:08:06.283693 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.283707 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.283722 | orchestrator | skipping: 
[testbed-node-2] 2025-02-19 09:08:06.283736 | orchestrator | 2025-02-19 09:08:06.283750 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-02-19 09:08:06.283764 | orchestrator | Wednesday 19 February 2025 09:05:35 +0000 (0:00:02.340) 0:00:33.973 **** 2025-02-19 09:08:06.283779 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.283794 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.283807 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.283822 | orchestrator | 2025-02-19 09:08:06.283836 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-02-19 09:08:06.283937 | orchestrator | Wednesday 19 February 2025 09:05:36 +0000 (0:00:00.513) 0:00:34.486 **** 2025-02-19 09:08:06.283954 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.283968 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.283981 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.283993 | orchestrator | 2025-02-19 09:08:06.284005 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-02-19 09:08:06.284018 | orchestrator | Wednesday 19 February 2025 09:05:36 +0000 (0:00:00.490) 0:00:34.976 **** 2025-02-19 09:08:06.284031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.284045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.284075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.284089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.284102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.284115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-19 09:08:06.284128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.284147 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.284166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.284179 | orchestrator | 2025-02-19 09:08:06.284192 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-19 09:08:06.284246 | orchestrator | Wednesday 19 February 2025 09:05:39 +0000 (0:00:03.127) 0:00:38.104 **** 2025-02-19 09:08:06.284268 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.284289 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.284302 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.284314 | orchestrator | 2025-02-19 09:08:06.284326 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-02-19 09:08:06.284339 | orchestrator | Wednesday 19 February 2025 09:05:40 +0000 (0:00:00.486) 0:00:38.590 **** 2025-02-19 09:08:06.284351 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-19 09:08:06.284364 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-19 09:08:06.284377 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-19 09:08:06.284389 | orchestrator | 2025-02-19 09:08:06.284402 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-02-19 09:08:06.284414 | orchestrator | Wednesday 19 February 2025 09:05:43 +0000 (0:00:03.065) 0:00:41.656 **** 2025-02-19 09:08:06.284427 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:08:06.284559 | orchestrator | 2025-02-19 09:08:06.284573 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-02-19 09:08:06.284586 | orchestrator | Wednesday 19 February 2025 09:05:44 +0000 (0:00:01.076) 0:00:42.732 **** 2025-02-19 09:08:06.284598 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.284611 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.284623 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.284636 | orchestrator | 2025-02-19 09:08:06.284648 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-02-19 09:08:06.284660 | orchestrator | Wednesday 19 February 2025 09:05:46 +0000 (0:00:02.157) 0:00:44.889 **** 
2025-02-19 09:08:06.284673 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-19 09:08:06.284685 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-19 09:08:06.284698 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:08:06.284710 | orchestrator | 2025-02-19 09:08:06.284722 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-02-19 09:08:06.284735 | orchestrator | Wednesday 19 February 2025 09:05:47 +0000 (0:00:01.368) 0:00:46.258 **** 2025-02-19 09:08:06.284747 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:08:06.284770 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:08:06.284782 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:08:06.284794 | orchestrator | 2025-02-19 09:08:06.284807 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-02-19 09:08:06.284819 | orchestrator | Wednesday 19 February 2025 09:05:48 +0000 (0:00:00.565) 0:00:46.823 **** 2025-02-19 09:08:06.284831 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-19 09:08:06.284844 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-19 09:08:06.284856 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-19 09:08:06.284869 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-19 09:08:06.284881 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-19 09:08:06.284893 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-19 09:08:06.284906 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-19 09:08:06.284918 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-19 09:08:06.284931 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-19 09:08:06.284943 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-19 09:08:06.284956 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-19 09:08:06.284968 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-19 09:08:06.284980 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-19 09:08:06.284992 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-19 09:08:06.285004 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-19 09:08:06.285017 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-19 09:08:06.285037 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-19 09:08:06.285050 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-19 09:08:06.285063 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 
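The fernet file set being copied here (crontab, fernet-rotate.sh, fernet-node-sync.sh, fernet-push.sh, fernet-healthcheck.sh) schedules fernet key rotation and distribution inside the keystone_fernet container. A minimal sketch of the rotation step those scripts wrap, assuming keystone-manage is available on PATH inside the container and the default keystone user/group names apply:

```python
import subprocess

# Hedged sketch: run one fernet key rotation, roughly what the shipped
# fernet-rotate.sh wrapper triggers from the crontab copied above.
subprocess.run(
    [
        "keystone-manage", "fernet_rotate",
        "--keystone-user", "keystone",
        "--keystone-group", "keystone",
    ],
    check=True,
)
```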
2025-02-19 09:08:06.285075 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-19 09:08:06.285088 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-19 09:08:06.285100 | orchestrator | 2025-02-19 09:08:06.285113 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-02-19 09:08:06.285125 | orchestrator | Wednesday 19 February 2025 09:06:02 +0000 (0:00:14.542) 0:01:01.366 **** 2025-02-19 09:08:06.285137 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-19 09:08:06.285150 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-19 09:08:06.285162 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-19 09:08:06.285174 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-19 09:08:06.285189 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-19 09:08:06.285259 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-19 09:08:06.285275 | orchestrator | 2025-02-19 09:08:06.285296 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-02-19 09:08:06.285311 | orchestrator | Wednesday 19 February 2025 09:06:06 +0000 (0:00:03.976) 0:01:05.342 **** 2025-02-19 09:08:06.285327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.285344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.285368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-19 09:08:06.285383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.285396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.285415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-19 09:08:06.285428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.285441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.285451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-19 09:08:06.285461 | orchestrator | 2025-02-19 09:08:06.285472 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-19 09:08:06.285482 | orchestrator | Wednesday 19 February 2025 09:06:10 +0000 (0:00:03.192) 0:01:08.535 **** 2025-02-19 09:08:06.285492 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.285506 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.285517 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.285527 | orchestrator | 2025-02-19 09:08:06.285537 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-02-19 09:08:06.285547 | orchestrator | Wednesday 19 February 2025 09:06:10 +0000 (0:00:00.469) 0:01:09.004 **** 2025-02-19 09:08:06.285557 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:08:06.285568 | orchestrator | 2025-02-19 09:08:06.285578 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-02-19 09:08:06.285588 | orchestrator | Wednesday 19 February 2025 09:06:13 +0000 (0:00:02.776) 0:01:11.781 **** 2025-02-19 09:08:06.285598 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:08:06.285613 | orchestrator | 2025-02-19 09:08:06.285623 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-02-19 09:08:06.285633 | orchestrator | Wednesday 19 February 2025 09:06:16 +0000 (0:00:02.848) 0:01:14.629 **** 2025-02-19 09:08:06.285643 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:08:06.285654 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:08:06.285664 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:08:06.285674 | orchestrator | 2025-02-19 09:08:06.285684 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-02-19 09:08:06.285694 | orchestrator | Wednesday 19 February 2025 09:06:17 +0000 (0:00:01.005) 0:01:15.635 **** 2025-02-19 09:08:06.285705 | orchestrator | ok: [testbed-node-0] 
2025-02-19 09:08:06.285721 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:08:06.285732 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:08:06.285742 | orchestrator | 2025-02-19 09:08:06.285752 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-02-19 09:08:06.285762 | orchestrator | Wednesday 19 February 2025 09:06:17 +0000 (0:00:00.596) 0:01:16.232 **** 2025-02-19 09:08:06.285773 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.285783 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.285793 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.285803 | orchestrator | 2025-02-19 09:08:06.285813 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-02-19 09:08:06.285823 | orchestrator | Wednesday 19 February 2025 09:06:18 +0000 (0:00:00.524) 0:01:16.757 **** 2025-02-19 09:08:06.285833 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:08:06.285843 | orchestrator | 2025-02-19 09:08:06.285853 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-02-19 09:08:06.285863 | orchestrator | Wednesday 19 February 2025 09:06:33 +0000 (0:00:15.035) 0:01:31.792 **** 2025-02-19 09:08:06.285873 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:08:06.285883 | orchestrator | 2025-02-19 09:08:06.285897 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-19 09:08:06.285907 | orchestrator | Wednesday 19 February 2025 09:06:44 +0000 (0:00:11.531) 0:01:43.324 **** 2025-02-19 09:08:06.285918 | orchestrator | 2025-02-19 09:08:06.285928 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-19 09:08:06.285938 | orchestrator | Wednesday 19 February 2025 09:06:44 +0000 (0:00:00.064) 0:01:43.388 **** 2025-02-19 09:08:06.285949 | orchestrator | 2025-02-19 09:08:06.285959 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-02-19 09:08:06.285969 | orchestrator | Wednesday 19 February 2025 09:06:45 +0000 (0:00:00.206) 0:01:43.594 **** 2025-02-19 09:08:06.285979 | orchestrator | 2025-02-19 09:08:06.285989 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-02-19 09:08:06.285999 | orchestrator | Wednesday 19 February 2025 09:06:45 +0000 (0:00:00.073) 0:01:43.668 **** 2025-02-19 09:08:06.286009 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:08:06.286045 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:08:06.286057 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:08:06.286068 | orchestrator | 2025-02-19 09:08:06.286078 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-02-19 09:08:06.286088 | orchestrator | Wednesday 19 February 2025 09:06:53 +0000 (0:00:08.443) 0:01:52.111 **** 2025-02-19 09:08:06.286098 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:08:06.286109 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:08:06.286119 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:08:06.286129 | orchestrator | 2025-02-19 09:08:06.286139 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-02-19 09:08:06.286150 | orchestrator | Wednesday 19 February 2025 09:06:59 +0000 (0:00:05.652) 0:01:57.763 **** 2025-02-19 09:08:06.286160 | orchestrator | changed: [testbed-node-0] 
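After the restart handlers above, each keystone container is considered healthy once its configured healthcheck passes ('healthcheck_curl http://<node-ip>:5000', interval 30, retries 3, timeout 30). A rough Python equivalent of that probe, using the testbed-node-0 address from the log as an example value:

```python
import time
import urllib.error
import urllib.request

# Rough equivalent of the container healthcheck shown in the log: any HTTP
# response means the Keystone API is answering; connection errors do not.
def keystone_is_healthy(url="http://192.168.16.10:5000", retries=3, interval=30):
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=30):
                return True
        except urllib.error.HTTPError:
            return True  # server answered, even if with an error status
        except OSError:
            if attempt + 1 < retries:
                time.sleep(interval)
    return False

if __name__ == "__main__":
    print(keystone_is_healthy())
```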
2025-02-19 09:08:06.286171 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:08:06.286181 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:08:06.286216 | orchestrator | 2025-02-19 09:08:06.286228 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-19 09:08:06.286238 | orchestrator | Wednesday 19 February 2025 09:07:10 +0000 (0:00:11.389) 0:02:09.153 **** 2025-02-19 09:08:06.286249 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:08:06.286259 | orchestrator | 2025-02-19 09:08:06.286269 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-02-19 09:08:06.286279 | orchestrator | Wednesday 19 February 2025 09:07:11 +0000 (0:00:00.651) 0:02:09.804 **** 2025-02-19 09:08:06.286290 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:08:06.286300 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:08:06.286311 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:08:06.286321 | orchestrator | 2025-02-19 09:08:06.286331 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-02-19 09:08:06.286342 | orchestrator | Wednesday 19 February 2025 09:07:12 +0000 (0:00:00.858) 0:02:10.663 **** 2025-02-19 09:08:06.286352 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:08:06.286362 | orchestrator | 2025-02-19 09:08:06.286372 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-02-19 09:08:06.286382 | orchestrator | Wednesday 19 February 2025 09:07:13 +0000 (0:00:01.380) 0:02:12.043 **** 2025-02-19 09:08:06.286393 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-02-19 09:08:06.286403 | orchestrator | 2025-02-19 09:08:06.286414 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-02-19 09:08:06.286424 | orchestrator | Wednesday 19 February 2025 09:07:24 +0000 (0:00:11.025) 0:02:23.068 **** 2025-02-19 09:08:06.286440 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-02-19 09:08:06.286450 | orchestrator | 2025-02-19 09:08:06.286461 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-02-19 09:08:06.286471 | orchestrator | Wednesday 19 February 2025 09:07:48 +0000 (0:00:24.127) 0:02:47.195 **** 2025-02-19 09:08:06.286482 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-02-19 09:08:06.286492 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-02-19 09:08:06.286503 | orchestrator | 2025-02-19 09:08:06.286513 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-02-19 09:08:06.286523 | orchestrator | Wednesday 19 February 2025 09:07:56 +0000 (0:00:07.942) 0:02:55.138 **** 2025-02-19 09:08:06.286534 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.286544 | orchestrator | 2025-02-19 09:08:06.286554 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-02-19 09:08:06.286565 | orchestrator | Wednesday 19 February 2025 09:07:56 +0000 (0:00:00.153) 0:02:55.292 **** 2025-02-19 09:08:06.286575 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.286585 | orchestrator | 2025-02-19 09:08:06.286596 | orchestrator | TASK 
[service-ks-register : keystone | Creating roles] ************************* 2025-02-19 09:08:06.286607 | orchestrator | Wednesday 19 February 2025 09:07:57 +0000 (0:00:00.410) 0:02:55.702 **** 2025-02-19 09:08:06.286617 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.286628 | orchestrator | 2025-02-19 09:08:06.286638 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-02-19 09:08:06.286649 | orchestrator | Wednesday 19 February 2025 09:07:57 +0000 (0:00:00.207) 0:02:55.910 **** 2025-02-19 09:08:06.286659 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.286670 | orchestrator | 2025-02-19 09:08:06.286684 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-02-19 09:08:06.286695 | orchestrator | Wednesday 19 February 2025 09:07:57 +0000 (0:00:00.520) 0:02:56.431 **** 2025-02-19 09:08:06.286705 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:08:06.286716 | orchestrator | 2025-02-19 09:08:06.286727 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-19 09:08:06.286737 | orchestrator | Wednesday 19 February 2025 09:08:02 +0000 (0:00:04.264) 0:03:00.696 **** 2025-02-19 09:08:06.286756 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:08:06.286766 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:08:06.286777 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:08:06.286787 | orchestrator | 2025-02-19 09:08:06.286797 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:08:06.286808 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-19 09:08:06.286819 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-02-19 09:08:06.286830 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-02-19 09:08:06.286840 | orchestrator | 2025-02-19 09:08:06.286854 | orchestrator | 2025-02-19 09:08:06.286924 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:08:06.286934 | orchestrator | Wednesday 19 February 2025 09:08:03 +0000 (0:00:01.250) 0:03:01.946 **** 2025-02-19 09:08:06.286946 | orchestrator | =============================================================================== 2025-02-19 09:08:06.286956 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.13s 2025-02-19 09:08:06.286966 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.04s 2025-02-19 09:08:06.286976 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 14.54s 2025-02-19 09:08:06.286986 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.53s 2025-02-19 09:08:06.286996 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.39s 2025-02-19 09:08:06.287006 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.03s 2025-02-19 09:08:06.287017 | orchestrator | keystone : Copying over keystone.conf ---------------------------------- 10.45s 2025-02-19 09:08:06.287026 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.44s 2025-02-19 09:08:06.287037 | orchestrator | service-ks-register 
: keystone | Creating endpoints --------------------- 7.94s 2025-02-19 09:08:06.287047 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.65s 2025-02-19 09:08:06.287057 | orchestrator | keystone : Creating default user role ----------------------------------- 4.26s 2025-02-19 09:08:06.287067 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.99s 2025-02-19 09:08:06.287077 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.98s 2025-02-19 09:08:06.287087 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.85s 2025-02-19 09:08:06.287097 | orchestrator | keystone : Check keystone containers ------------------------------------ 3.19s 2025-02-19 09:08:06.287108 | orchestrator | keystone : Copying over existing policy file ---------------------------- 3.13s 2025-02-19 09:08:06.287118 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 3.07s 2025-02-19 09:08:06.287128 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.95s 2025-02-19 09:08:06.287138 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.85s 2025-02-19 09:08:06.287153 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.78s 2025-02-19 09:08:09.327551 | orchestrator | 2025-02-19 09:08:06 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:09.327675 | orchestrator | 2025-02-19 09:08:06 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:09.327695 | orchestrator | 2025-02-19 09:08:06 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:09.327711 | orchestrator | 2025-02-19 09:08:06 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:09.327751 | orchestrator | 2025-02-19 09:08:06 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:09.327766 | orchestrator | 2025-02-19 09:08:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:09.327800 | orchestrator | 2025-02-19 09:08:09 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:09.328789 | orchestrator | 2025-02-19 09:08:09 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:09.329435 | orchestrator | 2025-02-19 09:08:09 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:09.330566 | orchestrator | 2025-02-19 09:08:09 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:09.332807 | orchestrator | 2025-02-19 09:08:09 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:12.390135 | orchestrator | 2025-02-19 09:08:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:12.390372 | orchestrator | 2025-02-19 09:08:12 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:15.429401 | orchestrator | 2025-02-19 09:08:12 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:15.429633 | orchestrator | 2025-02-19 09:08:12 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:15.429660 | orchestrator | 2025-02-19 09:08:12 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 
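The TASKS RECAP above shows "service-ks-register : keystone | Creating services" and "Creating endpoints" as the slowest steps. A hedged sketch of equivalent CLI calls for the identity service and the two endpoints listed earlier in the log; credential and clouds.yaml handling is omitted and assumed to be configured in the environment.

```python
import subprocess

# Hedged sketch of registering the identity service and its endpoints with the
# openstack CLI; region and URLs are taken from the log output above.
subprocess.run(
    ["openstack", "service", "create", "--name", "keystone", "identity"],
    check=True,
)
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:5000"),
    ("public", "https://api.testbed.osism.xyz:5000"),
]:
    subprocess.run(
        ["openstack", "endpoint", "create", "--region", "RegionOne",
         "keystone", interface, url],
        check=True,
    )
```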
2025-02-19 09:08:15.429688 | orchestrator | 2025-02-19 09:08:12 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:15.429705 | orchestrator | 2025-02-19 09:08:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:15.429738 | orchestrator | 2025-02-19 09:08:15 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:15.430422 | orchestrator | 2025-02-19 09:08:15 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:15.430455 | orchestrator | 2025-02-19 09:08:15 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:15.430473 | orchestrator | 2025-02-19 09:08:15 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:15.430497 | orchestrator | 2025-02-19 09:08:15 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:18.479406 | orchestrator | 2025-02-19 09:08:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:18.479572 | orchestrator | 2025-02-19 09:08:18 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:18.481167 | orchestrator | 2025-02-19 09:08:18 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:18.481258 | orchestrator | 2025-02-19 09:08:18 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:18.482238 | orchestrator | 2025-02-19 09:08:18 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:18.483646 | orchestrator | 2025-02-19 09:08:18 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:18.484340 | orchestrator | 2025-02-19 09:08:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:21.531049 | orchestrator | 2025-02-19 09:08:21 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:21.533520 | orchestrator | 2025-02-19 09:08:21 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:21.534395 | orchestrator | 2025-02-19 09:08:21 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:21.535838 | orchestrator | 2025-02-19 09:08:21 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:21.537014 | orchestrator | 2025-02-19 09:08:21 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:24.591476 | orchestrator | 2025-02-19 09:08:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:24.591610 | orchestrator | 2025-02-19 09:08:24 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:24.595023 | orchestrator | 2025-02-19 09:08:24 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:24.596296 | orchestrator | 2025-02-19 09:08:24 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:24.597359 | orchestrator | 2025-02-19 09:08:24 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:24.598443 | orchestrator | 2025-02-19 09:08:24 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:27.637189 | orchestrator | 2025-02-19 09:08:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:27.637378 | orchestrator | 2025-02-19 09:08:27 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 
2025-02-19 09:08:27.638655 | orchestrator | 2025-02-19 09:08:27 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:27.641099 | orchestrator | 2025-02-19 09:08:27 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:27.643536 | orchestrator | 2025-02-19 09:08:27 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:27.644200 | orchestrator | 2025-02-19 09:08:27 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:30.685644 | orchestrator | 2025-02-19 09:08:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:30.685959 | orchestrator | 2025-02-19 09:08:30 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:30.687257 | orchestrator | 2025-02-19 09:08:30 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:30.687319 | orchestrator | 2025-02-19 09:08:30 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:30.688146 | orchestrator | 2025-02-19 09:08:30 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:30.688626 | orchestrator | 2025-02-19 09:08:30 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:33.737442 | orchestrator | 2025-02-19 09:08:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:33.737584 | orchestrator | 2025-02-19 09:08:33 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:33.738894 | orchestrator | 2025-02-19 09:08:33 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:33.741773 | orchestrator | 2025-02-19 09:08:33 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:33.743925 | orchestrator | 2025-02-19 09:08:33 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:33.746129 | orchestrator | 2025-02-19 09:08:33 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:33.751981 | orchestrator | 2025-02-19 09:08:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:36.781062 | orchestrator | 2025-02-19 09:08:36 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:36.781695 | orchestrator | 2025-02-19 09:08:36 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:36.781748 | orchestrator | 2025-02-19 09:08:36 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:36.782863 | orchestrator | 2025-02-19 09:08:36 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:39.812388 | orchestrator | 2025-02-19 09:08:36 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:39.812508 | orchestrator | 2025-02-19 09:08:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:39.812546 | orchestrator | 2025-02-19 09:08:39 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:39.813723 | orchestrator | 2025-02-19 09:08:39 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:39.815166 | orchestrator | 2025-02-19 09:08:39 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:39.822125 | orchestrator | 2025-02-19 09:08:39 | INFO  | Task 
5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:39.822768 | orchestrator | 2025-02-19 09:08:39 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:39.823382 | orchestrator | 2025-02-19 09:08:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:42.869186 | orchestrator | 2025-02-19 09:08:42 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:42.870355 | orchestrator | 2025-02-19 09:08:42 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:42.871282 | orchestrator | 2025-02-19 09:08:42 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:42.871953 | orchestrator | 2025-02-19 09:08:42 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:42.874301 | orchestrator | 2025-02-19 09:08:42 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:45.910666 | orchestrator | 2025-02-19 09:08:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:45.910808 | orchestrator | 2025-02-19 09:08:45 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:45.911002 | orchestrator | 2025-02-19 09:08:45 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:45.911803 | orchestrator | 2025-02-19 09:08:45 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:45.917019 | orchestrator | 2025-02-19 09:08:45 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:45.921169 | orchestrator | 2025-02-19 09:08:45 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:48.946772 | orchestrator | 2025-02-19 09:08:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:48.946947 | orchestrator | 2025-02-19 09:08:48 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:48.947460 | orchestrator | 2025-02-19 09:08:48 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:48.947755 | orchestrator | 2025-02-19 09:08:48 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state STARTED 2025-02-19 09:08:48.947784 | orchestrator | 2025-02-19 09:08:48 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:48.947979 | orchestrator | 2025-02-19 09:08:48 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:51.977287 | orchestrator | 2025-02-19 09:08:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:51.977428 | orchestrator | 2025-02-19 09:08:51 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:51.977634 | orchestrator | 2025-02-19 09:08:51 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:51.978473 | orchestrator | 2025-02-19 09:08:51 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:08:51.979207 | orchestrator | 2025-02-19 09:08:51 | INFO  | Task 84968773-1b7a-4a5c-9161-56cc91b1602c is in state SUCCESS 2025-02-19 09:08:51.983863 | orchestrator | 2025-02-19 09:08:51 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:51.984109 | orchestrator | 2025-02-19 09:08:51 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:55.053433 | orchestrator | 2025-02-19 
09:08:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:55.053558 | orchestrator | 2025-02-19 09:08:55 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:58.088780 | orchestrator | 2025-02-19 09:08:55 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:58.088925 | orchestrator | 2025-02-19 09:08:55 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:08:58.088946 | orchestrator | 2025-02-19 09:08:55 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:58.088961 | orchestrator | 2025-02-19 09:08:55 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:08:58.088976 | orchestrator | 2025-02-19 09:08:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:08:58.089009 | orchestrator | 2025-02-19 09:08:58 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:08:58.089418 | orchestrator | 2025-02-19 09:08:58 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:08:58.089454 | orchestrator | 2025-02-19 09:08:58 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:08:58.089471 | orchestrator | 2025-02-19 09:08:58 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:08:58.089496 | orchestrator | 2025-02-19 09:08:58 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:01.127143 | orchestrator | 2025-02-19 09:08:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:01.127523 | orchestrator | 2025-02-19 09:09:01 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:01.127880 | orchestrator | 2025-02-19 09:09:01 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:01.127920 | orchestrator | 2025-02-19 09:09:01 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:01.128333 | orchestrator | 2025-02-19 09:09:01 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:09:01.128639 | orchestrator | 2025-02-19 09:09:01 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:04.182181 | orchestrator | 2025-02-19 09:09:01 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:04.182368 | orchestrator | 2025-02-19 09:09:04 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:04.182686 | orchestrator | 2025-02-19 09:09:04 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:04.182760 | orchestrator | 2025-02-19 09:09:04 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:04.183433 | orchestrator | 2025-02-19 09:09:04 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:09:04.184044 | orchestrator | 2025-02-19 09:09:04 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:07.255204 | orchestrator | 2025-02-19 09:09:04 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:07.255380 | orchestrator | 2025-02-19 09:09:07 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:07.262562 | orchestrator | 2025-02-19 09:09:07 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:07.262790 | orchestrator | 2025-02-19 
09:09:07 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:07.264136 | orchestrator | 2025-02-19 09:09:07 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:09:07.265519 | orchestrator | 2025-02-19 09:09:07 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:10.318398 | orchestrator | 2025-02-19 09:09:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:10.318543 | orchestrator | 2025-02-19 09:09:10 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:10.319166 | orchestrator | 2025-02-19 09:09:10 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:10.319263 | orchestrator | 2025-02-19 09:09:10 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:10.319292 | orchestrator | 2025-02-19 09:09:10 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:09:10.319981 | orchestrator | 2025-02-19 09:09:10 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:13.374825 | orchestrator | 2025-02-19 09:09:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:13.374957 | orchestrator | 2025-02-19 09:09:13 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:13.380128 | orchestrator | 2025-02-19 09:09:13 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:13.383494 | orchestrator | 2025-02-19 09:09:13 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:13.383553 | orchestrator | 2025-02-19 09:09:13 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:09:13.385149 | orchestrator | 2025-02-19 09:09:13 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:16.437356 | orchestrator | 2025-02-19 09:09:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:16.437452 | orchestrator | 2025-02-19 09:09:16 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:16.437631 | orchestrator | 2025-02-19 09:09:16 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:16.439319 | orchestrator | 2025-02-19 09:09:16 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:16.440115 | orchestrator | 2025-02-19 09:09:16 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:09:16.440893 | orchestrator | 2025-02-19 09:09:16 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:19.486794 | orchestrator | 2025-02-19 09:09:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:19.486978 | orchestrator | 2025-02-19 09:09:19 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:19.487619 | orchestrator | 2025-02-19 09:09:19 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:19.488777 | orchestrator | 2025-02-19 09:09:19 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:19.490707 | orchestrator | 2025-02-19 09:09:19 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:09:19.491209 | orchestrator | 2025-02-19 09:09:19 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:22.542008 | 
orchestrator | 2025-02-19 09:09:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:22.542363 | orchestrator | 2025-02-19 09:09:22 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:22.542995 | orchestrator | 2025-02-19 09:09:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:22.543031 | orchestrator | 2025-02-19 09:09:22 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:22.543056 | orchestrator | 2025-02-19 09:09:22 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state STARTED 2025-02-19 09:09:22.543888 | orchestrator | 2025-02-19 09:09:22 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:22.545217 | orchestrator | 2025-02-19 09:09:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:25.595661 | orchestrator | 2025-02-19 09:09:25 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:25.597070 | orchestrator | 2025-02-19 09:09:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:25.597885 | orchestrator | 2025-02-19 09:09:25 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:25.597971 | orchestrator | 2025-02-19 09:09:25 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:09:25.597990 | orchestrator | 2025-02-19 09:09:25 | INFO  | Task 5ddbd34f-f00d-4e12-af30-bebb93348abe is in state SUCCESS 2025-02-19 09:09:25.598015 | orchestrator | 2025-02-19 09:09:25 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:25.600380 | orchestrator | 2025-02-19 09:09:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:25.600464 | orchestrator | 2025-02-19 09:09:25.600494 | orchestrator | 2025-02-19 09:09:25.600513 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:09:25.600531 | orchestrator | 2025-02-19 09:09:25.600554 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:09:25.600578 | orchestrator | Wednesday 19 February 2025 09:08:03 +0000 (0:00:00.851) 0:00:00.851 **** 2025-02-19 09:09:25.600602 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:09:25.600628 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:09:25.600650 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:09:25.600665 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:09:25.600679 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:09:25.600693 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:09:25.600707 | orchestrator | ok: [testbed-manager] 2025-02-19 09:09:25.600721 | orchestrator | 2025-02-19 09:09:25.600735 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:09:25.600763 | orchestrator | Wednesday 19 February 2025 09:08:05 +0000 (0:00:01.205) 0:00:02.057 **** 2025-02-19 09:09:25.600778 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-02-19 09:09:25.600797 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-02-19 09:09:25.600812 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-02-19 09:09:25.600850 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-02-19 09:09:25.600865 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-02-19 09:09:25.600878 | 
orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-02-19 09:09:25.600893 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-02-19 09:09:25.600907 | orchestrator | 2025-02-19 09:09:25.600921 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-02-19 09:09:25.600935 | orchestrator | 2025-02-19 09:09:25.600949 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-02-19 09:09:25.600965 | orchestrator | Wednesday 19 February 2025 09:08:06 +0000 (0:00:01.661) 0:00:03.718 **** 2025-02-19 09:09:25.600987 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-02-19 09:09:25.601014 | orchestrator | 2025-02-19 09:09:25.601037 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-02-19 09:09:25.601060 | orchestrator | Wednesday 19 February 2025 09:08:10 +0000 (0:00:03.367) 0:00:07.085 **** 2025-02-19 09:09:25.601084 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-02-19 09:09:25.601107 | orchestrator | 2025-02-19 09:09:25.601132 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-02-19 09:09:25.601157 | orchestrator | Wednesday 19 February 2025 09:08:14 +0000 (0:00:04.726) 0:00:11.812 **** 2025-02-19 09:09:25.601183 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-02-19 09:09:25.601214 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-02-19 09:09:25.601280 | orchestrator | 2025-02-19 09:09:25.601302 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-02-19 09:09:25.601322 | orchestrator | Wednesday 19 February 2025 09:08:23 +0000 (0:00:08.697) 0:00:20.510 **** 2025-02-19 09:09:25.601345 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:09:25.601368 | orchestrator | 2025-02-19 09:09:25.601389 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-02-19 09:09:25.601412 | orchestrator | Wednesday 19 February 2025 09:08:27 +0000 (0:00:03.758) 0:00:24.269 **** 2025-02-19 09:09:25.601434 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:09:25.601457 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-02-19 09:09:25.601479 | orchestrator | 2025-02-19 09:09:25.601501 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-02-19 09:09:25.601523 | orchestrator | Wednesday 19 February 2025 09:08:31 +0000 (0:00:04.321) 0:00:28.590 **** 2025-02-19 09:09:25.601544 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:09:25.601567 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-02-19 09:09:25.601589 | orchestrator | 2025-02-19 09:09:25.601611 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-02-19 09:09:25.601632 | orchestrator | Wednesday 19 February 2025 09:08:38 +0000 (0:00:07.031) 0:00:35.622 **** 2025-02-19 09:09:25.601653 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 
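Editor's note: the ceph-rgw play above registers the RADOS Gateway as the Swift object-store service in Keystone: a service entry, internal and public endpoints, the service project, a ceph_rgw user, the ResellerAdmin role, and an admin role grant. The actual service-ks-register role does this with Ansible OpenStack modules; the sketch below shows roughly equivalent calls through openstacksdk for orientation only. The cloud name "testbed", the region "RegionOne", and the "CHANGE_ME" password are assumptions, and unlike the role this sketch is not idempotent.

    import openstack

    # Roughly what the service-ks-register tasks above do for ceph-rgw,
    # expressed with openstacksdk (cloud/region/password are assumptions).
    conn = openstack.connect(cloud="testbed")

    service = conn.identity.create_service(name="swift", type="object-store")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"),
        ("public", "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id, interface=interface, url=url, region_id="RegionOne"
        )

    project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
    user = conn.identity.create_user(
        name="ceph_rgw", password="CHANGE_ME", default_project_id=project.id
    )
    conn.identity.create_role(name="ResellerAdmin")
    admin = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, admin)
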
2025-02-19 09:09:25.601675 | orchestrator | 2025-02-19 09:09:25.601698 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:09:25.601721 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:09:25.601744 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:09:25.601777 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:09:25.601818 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:09:25.601842 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:09:25.601884 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:09:25.601909 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:09:25.601933 | orchestrator | 2025-02-19 09:09:25.601958 | orchestrator | 2025-02-19 09:09:25.601982 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:09:25.602005 | orchestrator | Wednesday 19 February 2025 09:08:48 +0000 (0:00:09.775) 0:00:45.397 **** 2025-02-19 09:09:25.602092 | orchestrator | =============================================================================== 2025-02-19 09:09:25.602124 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 9.78s 2025-02-19 09:09:25.602146 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.70s 2025-02-19 09:09:25.602166 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.03s 2025-02-19 09:09:25.602181 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.73s 2025-02-19 09:09:25.602195 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.32s 2025-02-19 09:09:25.602209 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.76s 2025-02-19 09:09:25.602254 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.37s 2025-02-19 09:09:25.602275 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.66s 2025-02-19 09:09:25.602289 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.21s 2025-02-19 09:09:25.602303 | orchestrator | 2025-02-19 09:09:25.602317 | orchestrator | 2025-02-19 09:09:25.602331 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:09:25.602345 | orchestrator | 2025-02-19 09:09:25.602359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:09:25.602373 | orchestrator | Wednesday 19 February 2025 09:08:03 +0000 (0:00:00.548) 0:00:00.548 **** 2025-02-19 09:09:25.602387 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:09:25.602402 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:09:25.602416 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:09:25.602430 | orchestrator | 2025-02-19 09:09:25.602452 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:09:25.602466 | 
orchestrator | Wednesday 19 February 2025 09:08:04 +0000 (0:00:00.745) 0:00:01.293 **** 2025-02-19 09:09:25.602481 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-02-19 09:09:25.602495 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-02-19 09:09:25.602509 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-02-19 09:09:25.602523 | orchestrator | 2025-02-19 09:09:25.602537 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-02-19 09:09:25.602551 | orchestrator | 2025-02-19 09:09:25.602564 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-19 09:09:25.602579 | orchestrator | Wednesday 19 February 2025 09:08:05 +0000 (0:00:00.549) 0:00:01.843 **** 2025-02-19 09:09:25.602593 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:09:25.602607 | orchestrator | 2025-02-19 09:09:25.602621 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-02-19 09:09:25.602635 | orchestrator | Wednesday 19 February 2025 09:08:06 +0000 (0:00:00.959) 0:00:02.803 **** 2025-02-19 09:09:25.602649 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-02-19 09:09:25.602675 | orchestrator | 2025-02-19 09:09:25.602689 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-02-19 09:09:25.602703 | orchestrator | Wednesday 19 February 2025 09:08:10 +0000 (0:00:04.412) 0:00:07.215 **** 2025-02-19 09:09:25.602717 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-02-19 09:09:25.602731 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-02-19 09:09:25.602745 | orchestrator | 2025-02-19 09:09:25.602759 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-02-19 09:09:25.602773 | orchestrator | Wednesday 19 February 2025 09:08:18 +0000 (0:00:07.969) 0:00:15.185 **** 2025-02-19 09:09:25.602786 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-02-19 09:09:25.602801 | orchestrator | 2025-02-19 09:09:25.602814 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-02-19 09:09:25.602828 | orchestrator | Wednesday 19 February 2025 09:08:22 +0000 (0:00:04.391) 0:00:19.583 **** 2025-02-19 09:09:25.602842 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:09:25.602856 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-02-19 09:09:25.602870 | orchestrator | 2025-02-19 09:09:25.602884 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-02-19 09:09:25.602898 | orchestrator | Wednesday 19 February 2025 09:08:27 +0000 (0:00:04.754) 0:00:24.337 **** 2025-02-19 09:09:25.602912 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:09:25.602926 | orchestrator | 2025-02-19 09:09:25.602940 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-02-19 09:09:25.602954 | orchestrator | Wednesday 19 February 2025 09:08:31 +0000 (0:00:03.579) 0:00:27.916 **** 2025-02-19 09:09:25.602968 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-02-19 
09:09:25.602982 | orchestrator | 2025-02-19 09:09:25.602995 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-02-19 09:09:25.603009 | orchestrator | Wednesday 19 February 2025 09:08:36 +0000 (0:00:04.829) 0:00:32.746 **** 2025-02-19 09:09:25.603044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-19 09:09:25.603063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-19 09:09:25.603098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:09:25.603114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:09:25.603144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-19 09:09:25.603160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-19 09:09:25.603181 | orchestrator | 2025-02-19 09:09:25.603196 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-02-19 09:09:25.603212 | orchestrator | Wednesday 19 February 2025 09:08:48 +0000 (0:00:12.484) 0:00:45.230 **** 2025-02-19 09:09:25.603408 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:09:25.603431 | orchestrator | 2025-02-19 09:09:25.603445 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-02-19 09:09:25.603459 | orchestrator | Wednesday 19 February 2025 09:08:50 +0000 (0:00:02.075) 0:00:47.306 **** 2025-02-19 09:09:25.603472 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:09:25.603485 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:09:25.603497 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:09:25.603509 | orchestrator | 2025-02-19 09:09:25.603521 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-02-19 09:09:25.603533 | orchestrator | Wednesday 19 February 2025 09:09:18 +0000 (0:00:28.102) 0:01:15.408 **** 2025-02-19 09:09:25.603545 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-19 09:09:25.603558 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-19 09:09:25.603571 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-02-19 09:09:25.603583 | orchestrator | 2025-02-19 09:09:25.603595 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-02-19 09:09:25.603607 | orchestrator | Wednesday 19 February 2025 09:09:22 +0000 (0:00:03.457) 0:01:18.866 **** 2025-02-19 09:09:25.603620 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option 2025-02-19 09:09:25.603645 | orchestrator | failed: [testbed-node-0] (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) => {"ansible_loop_var": "item", "changed": false, "item": {"cluster": "ceph", "enabled": true, "name": "rbd", "type": "rbd"}, "msg": "Could not find or access '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"} 2025-02-19 09:09:28.647374 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option 2025-02-19 09:09:28.647480 | orchestrator | failed: [testbed-node-1] (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) => {"ansible_loop_var": "item", "changed": false, "item": {"cluster": "ceph", "enabled": true, "name": "rbd", "type": "rbd"}, "msg": "Could not find or access '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"} 2025-02-19 09:09:28.647525 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option 2025-02-19 09:09:28.647537 | orchestrator | failed: [testbed-node-2] (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) => {"ansible_loop_var": "item", "changed": false, "item": {"cluster": "ceph", "enabled": true, "name": "rbd", "type": "rbd"}, "msg": "Could not find or access '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"} 2025-02-19 09:09:28.647547 | orchestrator | 2025-02-19 09:09:28.647559 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:09:28.647570 | orchestrator | testbed-node-0 : ok=13  changed=8  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-19 09:09:28.647582 | orchestrator | testbed-node-1 : ok=7  changed=3  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-19 09:09:28.647592 | orchestrator | testbed-node-2 : ok=7  changed=3  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-19 09:09:28.647601 | orchestrator | 2025-02-19 09:09:28.647611 | orchestrator | 2025-02-19 09:09:28.647621 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:09:28.647630 | orchestrator | Wednesday 19 February 2025 09:09:22 +0000 (0:00:00.527) 0:01:19.394 **** 2025-02-19 09:09:28.647640 | orchestrator | =============================================================================== 2025-02-19 09:09:28.647663 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 28.10s 2025-02-19 09:09:28.647673 | orchestrator | glance : Ensuring config directories exist ----------------------------- 12.48s 2025-02-19 09:09:28.647682 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.97s 2025-02-19 09:09:28.647692 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.83s 2025-02-19 09:09:28.647701 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.75s 2025-02-19 09:09:28.647711 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.41s 2025-02-19 09:09:28.647721 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.40s 2025-02-19 09:09:28.647730 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.58s 2025-02-19 09:09:28.647740 | orchestrator | glance : Copy over multiple ceph configs for Glance --------------------- 3.46s 2025-02-19 09:09:28.647750 | 
orchestrator | glance : include_tasks -------------------------------------------------- 2.08s 2025-02-19 09:09:28.647759 | orchestrator | glance : include_tasks -------------------------------------------------- 0.96s 2025-02-19 09:09:28.647769 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s 2025-02-19 09:09:28.647779 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-02-19 09:09:28.647788 | orchestrator | glance : Copy over ceph Glance keyrings --------------------------------- 0.53s 2025-02-19 09:09:28.647812 | orchestrator | 2025-02-19 09:09:28 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:28.649881 | orchestrator | 2025-02-19 09:09:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:28.653793 | orchestrator | 2025-02-19 09:09:28 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:28.658271 | orchestrator | 2025-02-19 09:09:28 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:09:28.660629 | orchestrator | 2025-02-19 09:09:28 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:31.706759 | orchestrator | 2025-02-19 09:09:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:31.706878 | orchestrator | 2025-02-19 09:09:31 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:31.708101 | orchestrator | 2025-02-19 09:09:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:31.708932 | orchestrator | 2025-02-19 09:09:31 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:31.709717 | orchestrator | 2025-02-19 09:09:31 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:09:31.710494 | orchestrator | 2025-02-19 09:09:31 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:34.747455 | orchestrator | 2025-02-19 09:09:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:34.747595 | orchestrator | 2025-02-19 09:09:34 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:34.747982 | orchestrator | 2025-02-19 09:09:34 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:34.748942 | orchestrator | 2025-02-19 09:09:34 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:34.749939 | orchestrator | 2025-02-19 09:09:34 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:09:34.750888 | orchestrator | 2025-02-19 09:09:34 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:37.792300 | orchestrator | 2025-02-19 09:09:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:09:37.792419 | orchestrator | 2025-02-19 09:09:37 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:09:37.794051 | orchestrator | 2025-02-19 09:09:37 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:09:37.794110 | orchestrator | 2025-02-19 09:09:37 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:09:37.794131 | orchestrator | 2025-02-19 09:09:37 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:09:37.797471 | orchestrator | 
2025-02-19 09:09:37 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:09:40.845945 | orchestrator | 2025-02-19 09:09:37 | INFO  | Wait 1 second(s) until the next check
[... polling continues unchanged: on every cycle the five tasks a21bd136-e9ae-41ae-a086-daa3c821d65a, 9d749719-e4e4-4bc8-80e5-0795801cf979, 9c64599f-8110-4b7a-b4e6-beb69fb4438e, 8fc8c293-66d9-41dc-a7db-db24980ab1fb and 4d49c698-7509-4061-9298-6ff0d14d3b36 are reported in state STARTED, followed by "Wait 1 second(s) until the next check", roughly every 3 seconds from 09:09:40 to 09:10:45 ...]
2025-02-19 09:10:48.168840 | orchestrator | 2025-02-19 09:10:48 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in
state STARTED 2025-02-19 09:10:48.170568 | orchestrator | 2025-02-19 09:10:48 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:10:48.170626 | orchestrator | 2025-02-19 09:10:48 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:10:48.170652 | orchestrator | 2025-02-19 09:10:48 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:10:48.172979 | orchestrator | 2025-02-19 09:10:48 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:10:51.220681 | orchestrator | 2025-02-19 09:10:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:10:51.220852 | orchestrator | 2025-02-19 09:10:51 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state STARTED 2025-02-19 09:10:51.221737 | orchestrator | 2025-02-19 09:10:51 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:10:51.221793 | orchestrator | 2025-02-19 09:10:51 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:10:51.226191 | orchestrator | 2025-02-19 09:10:51 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:10:51.229824 | orchestrator | 2025-02-19 09:10:51 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:10:54.310396 | orchestrator | 2025-02-19 09:10:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:10:54.310545 | orchestrator | 2025-02-19 09:10:54 | INFO  | Task a21bd136-e9ae-41ae-a086-daa3c821d65a is in state SUCCESS 2025-02-19 09:10:54.312216 | orchestrator | 2025-02-19 09:10:54 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:10:54.312289 | orchestrator | 2025-02-19 09:10:54 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:10:54.312305 | orchestrator | 2025-02-19 09:10:54 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:10:54.312320 | orchestrator | 2025-02-19 09:10:54 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:10:54.312334 | orchestrator | 2025-02-19 09:10:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:10:54.312357 | orchestrator | 2025-02-19 09:10:54.312401 | orchestrator | 2025-02-19 09:10:54.312416 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:10:54.312430 | orchestrator | 2025-02-19 09:10:54.312444 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:10:54.312458 | orchestrator | Wednesday 19 February 2025 09:08:09 +0000 (0:00:00.575) 0:00:00.575 **** 2025-02-19 09:10:54.312472 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:10:54.312488 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:10:54.312502 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:10:54.312516 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:10:54.312530 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:10:54.312544 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:10:54.312558 | orchestrator | 2025-02-19 09:10:54.312572 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:10:54.312586 | orchestrator | Wednesday 19 February 2025 09:08:11 +0000 (0:00:01.240) 0:00:01.816 **** 2025-02-19 09:10:54.312600 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-02-19 09:10:54.312614 | 
orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-02-19 09:10:54.312629 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-02-19 09:10:54.312642 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-02-19 09:10:54.312656 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-02-19 09:10:54.312670 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-02-19 09:10:54.313088 | orchestrator | 2025-02-19 09:10:54.313105 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-02-19 09:10:54.313120 | orchestrator | 2025-02-19 09:10:54.313134 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-19 09:10:54.313148 | orchestrator | Wednesday 19 February 2025 09:08:12 +0000 (0:00:00.972) 0:00:02.789 **** 2025-02-19 09:10:54.313162 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:10:54.313177 | orchestrator | 2025-02-19 09:10:54.313191 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-02-19 09:10:54.313205 | orchestrator | Wednesday 19 February 2025 09:08:14 +0000 (0:00:01.890) 0:00:04.679 **** 2025-02-19 09:10:54.313219 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-02-19 09:10:54.313233 | orchestrator | 2025-02-19 09:10:54.313276 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-02-19 09:10:54.313292 | orchestrator | Wednesday 19 February 2025 09:08:17 +0000 (0:00:03.691) 0:00:08.371 **** 2025-02-19 09:10:54.313307 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-02-19 09:10:54.313336 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-02-19 09:10:54.313351 | orchestrator | 2025-02-19 09:10:54.313365 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-02-19 09:10:54.313379 | orchestrator | Wednesday 19 February 2025 09:08:25 +0000 (0:00:08.098) 0:00:16.469 **** 2025-02-19 09:10:54.313393 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:10:54.313450 | orchestrator | 2025-02-19 09:10:54.313466 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-02-19 09:10:54.313480 | orchestrator | Wednesday 19 February 2025 09:08:29 +0000 (0:00:03.895) 0:00:20.365 **** 2025-02-19 09:10:54.313495 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:10:54.313509 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-02-19 09:10:54.313524 | orchestrator | 2025-02-19 09:10:54.313538 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-02-19 09:10:54.313553 | orchestrator | Wednesday 19 February 2025 09:08:34 +0000 (0:00:04.685) 0:00:25.051 **** 2025-02-19 09:10:54.313567 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:10:54.313595 | orchestrator | 2025-02-19 09:10:54.313619 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-02-19 09:10:54.313640 | orchestrator | Wednesday 19 February 2025 09:08:37 +0000 
(0:00:03.488) 0:00:28.539 **** 2025-02-19 09:10:54.313655 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-02-19 09:10:54.313996 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-02-19 09:10:54.314059 | orchestrator | 2025-02-19 09:10:54.314078 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-02-19 09:10:54.314092 | orchestrator | Wednesday 19 February 2025 09:08:47 +0000 (0:00:10.037) 0:00:38.576 **** 2025-02-19 09:10:54.314112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.314170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.314188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.314204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.314230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.314333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.314514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.314625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.314697 | orchestrator | 2025-02-19 09:10:54.314712 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-19 09:10:54.314726 | orchestrator | Wednesday 19 February 2025 09:08:52 +0000 (0:00:04.275) 0:00:42.852 **** 2025-02-19 09:10:54.314740 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.314755 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:10:54.314769 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:10:54.314783 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:10:54.314797 | orchestrator | 2025-02-19 09:10:54.314811 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-02-19 09:10:54.314825 | orchestrator | Wednesday 19 February 2025 09:08:55 +0000 (0:00:03.494) 0:00:46.347 **** 2025-02-19 09:10:54.314838 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-02-19 09:10:54.314859 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-02-19 09:10:54.314874 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-02-19 09:10:54.314887 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-02-19 09:10:54.314901 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-02-19 09:10:54.314915 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-02-19 09:10:54.314929 | orchestrator | 2025-02-19 09:10:54.314943 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-02-19 09:10:54.314957 | orchestrator | Wednesday 19 February 2025 09:09:03 +0000 (0:00:07.839) 0:00:54.186 **** 2025-02-19 09:10:54.314972 | 
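
The two cinder external-Ceph steps above first create the per-service ceph config subdirectories and then distribute one ceph.conf per configured cluster into them; in this run the only cluster item is {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}, and only cinder-volume and cinder-backup consume it, so the cinder-api and cinder-scheduler combinations are skipped in the results that follow. A minimal sketch of that kind of nested loop, with illustrative variable names rather than the actual kolla-ansible ones:

    - name: Copy ceph.conf into Cinder service config directories (illustrative sketch)
      ansible.builtin.template:
        src: ceph.conf.j2
        dest: "/etc/kolla/{{ item.0.key }}/ceph/{{ item.1.cluster }}.conf"
        mode: "0660"
      become: true
      with_nested:
        - "{{ cinder_services | dict2items }}"   # cinder-api, cinder-scheduler, cinder-volume, cinder-backup
        - "{{ cinder_ceph_backends }}"           # e.g. [{'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]
      when:
        - item.0.value.enabled | bool
        - item.0.key in ['cinder-volume', 'cinder-backup']   # only these services talk to Ceph directly
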
orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-19 09:10:54.314988 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-19 09:10:54.315030 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-19 09:10:54.315046 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': 
True}])  2025-02-19 09:10:54.315069 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-19 09:10:54.315084 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-02-19 09:10:54.315100 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-19 09:10:54.315121 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-19 09:10:54.315137 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-19 09:10:54.315161 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-19 09:10:54.315434 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-19 09:10:54.315454 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-02-19 09:10:54.315467 | orchestrator | 2025-02-19 09:10:54.315480 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-02-19 09:10:54.315493 | orchestrator | Wednesday 19 February 2025 09:09:13 +0000 (0:00:09.714) 0:01:03.901 **** 2025-02-19 09:10:54.315506 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. 
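
The failures reported below come from a plain controller-side file copy: the role expects the Ceph client keyring for each configured cluster to already be present on the Ansible controller under the kolla overlay tree of the configuration repository, and copies it into the cinder-volume configuration on the storage nodes. A minimal sketch of that kind of task, with the source path taken from the error message below and everything else illustrative rather than the actual role code:

    - name: Copy Ceph keyring files for cinder-volume (illustrative sketch)
      ansible.builtin.copy:
        src: "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/{{ item.cluster }}.client.cinder.keyring"
        dest: "/etc/kolla/cinder-volume/ceph/"   # destination directory is assumed
        mode: "0600"
      become: true
      loop: "{{ cinder_ceph_backends }}"         # in this run a single item: {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}

Because ansible.builtin.copy resolves src on the controller unless remote_src is set, the "Could not find or access ... on the Ansible Controller" message below means the keyring file was simply not present in that overlay directory when the task ran, so the copy fails on every cinder-volume host (testbed-node-3, testbed-node-4 and testbed-node-5).
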
The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option 2025-02-19 09:10:54.315531 | orchestrator | failed: [testbed-node-3] (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) => {"ansible_loop_var": "item", "changed": false, "item": {"cluster": "ceph", "enabled": true, "name": "rbd-1"}, "msg": "Could not find or access '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"} 2025-02-19 09:10:54.315576 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option 2025-02-19 09:10:54.315596 | orchestrator | failed: [testbed-node-4] (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) => {"ansible_loop_var": "item", "changed": false, "item": {"cluster": "ceph", "enabled": true, "name": "rbd-1"}, "msg": "Could not find or access '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"} 2025-02-19 09:10:54.315619 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option 2025-02-19 09:10:54.315632 | orchestrator | failed: [testbed-node-5] (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) => {"ansible_loop_var": "item", "changed": false, "item": {"cluster": "ceph", "enabled": true, "name": "rbd-1"}, "msg": "Could not find or access '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"} 2025-02-19 09:10:54.315644 | orchestrator | 2025-02-19 09:10:54.315657 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-02-19 09:10:54.315669 | orchestrator | Wednesday 19 February 2025 09:09:17 +0000 (0:00:04.364) 0:01:08.266 **** 2025-02-19 09:10:54.315682 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.315705 | orchestrator | 2025-02-19 09:10:54.315719 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-02-19 09:10:54.315731 | orchestrator | Wednesday 19 February 2025 09:09:17 +0000 (0:00:00.264) 0:01:08.531 **** 2025-02-19 09:10:54.315743 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.315756 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:10:54.315768 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:10:54.315780 | orchestrator | 2025-02-19 09:10:54.315792 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-19 09:10:54.315805 | orchestrator | Wednesday 19 February 2025 09:09:18 +0000 (0:00:00.959) 0:01:09.491 **** 2025-02-19 09:10:54.315818 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:10:54.315830 | orchestrator | 2025-02-19 09:10:54.315842 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] 
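
The service-cert-copy step whose header appears above copies any extra CA certificates into the configuration directory of every enabled cinder service, typically so the containers trust endpoints signed by a private CA. A minimal sketch of the pattern, with illustrative variable names:

    - name: Copy extra CA certificates into each service config dir (illustrative sketch)
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir }}/ca/"   # assumed location of the extra CA bundle on the controller
        dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
        mode: "0644"
      become: true
      with_dict: "{{ cinder_services }}"          # cinder-api, cinder-scheduler, cinder-volume, cinder-backup
      when: item.value.enabled | bool
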
********* 2025-02-19 09:10:54.315855 | orchestrator | Wednesday 19 February 2025 09:09:21 +0000 (0:00:02.202) 0:01:11.693 **** 2025-02-19 09:10:54.315868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.315887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.315907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.315921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.315951 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.315965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.315979 | orchestrator | 2025-02-19 09:10:54.315991 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-02-19 09:10:54.316004 | orchestrator | Wednesday 19 February 2025 09:09:24 +0000 (0:00:03.522) 0:01:15.216 **** 2025-02-19 09:10:54.316016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.316036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.316070 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.316092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316105 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:10:54.316119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.316373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316390 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:10:54.316401 | orchestrator | 2025-02-19 09:10:54.316411 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-02-19 09:10:54.316421 | orchestrator | Wednesday 19 February 2025 09:09:25 +0000 (0:00:00.941) 0:01:16.157 **** 2025-02-19 09:10:54.316439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.316456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.316478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316489 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.316499 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:10:54.316509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.316520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316535 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:10:54.316546 | orchestrator | 2025-02-19 09:10:54.316556 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-02-19 09:10:54.316571 | orchestrator | Wednesday 19 February 2025 09:09:26 +0000 (0:00:01.397) 0:01:17.554 **** 2025-02-19 09:10:54.316582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.316592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.316603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.316614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.316634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.316667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.316708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316730 | orchestrator | 2025-02-19 09:10:54.316740 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] 
********************************** 2025-02-19 09:10:54.316750 | orchestrator | Wednesday 19 February 2025 09:09:30 +0000 (0:00:03.819) 0:01:21.374 **** 2025-02-19 09:10:54.316761 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-19 09:10:54.316772 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-19 09:10:54.316782 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-02-19 09:10:54.316792 | orchestrator | 2025-02-19 09:10:54.316802 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-02-19 09:10:54.316812 | orchestrator | Wednesday 19 February 2025 09:09:33 +0000 (0:00:03.217) 0:01:24.591 **** 2025-02-19 09:10:54.316823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.316833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.316855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.316866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.316877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.316913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.316950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.316978 | orchestrator | 2025-02-19 09:10:54.316988 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-02-19 09:10:54.316999 | orchestrator | Wednesday 19 February 2025 09:09:44 +0000 (0:00:10.921) 0:01:35.513 **** 2025-02-19 
09:10:54.317009 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.317019 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:10:54.317029 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:10:54.317039 | orchestrator | 2025-02-19 09:10:54.317052 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-02-19 09:10:54.317063 | orchestrator | Wednesday 19 February 2025 09:09:45 +0000 (0:00:01.058) 0:01:36.572 **** 2025-02-19 09:10:54.317073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.317089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317121 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.317166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.317184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317222 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:10:54.317232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-19 09:10:54.317243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317298 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:10:54.317308 | orchestrator | 2025-02-19 09:10:54.317319 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-02-19 09:10:54.317329 | orchestrator | Wednesday 19 February 2025 09:09:46 +0000 (0:00:00.899) 0:01:37.471 **** 2025-02-19 09:10:54.317339 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.317349 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:10:54.317359 | orchestrator | skipping: 
[testbed-node-2] 2025-02-19 09:10:54.317369 | orchestrator | 2025-02-19 09:10:54.317383 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-02-19 09:10:54.317394 | orchestrator | Wednesday 19 February 2025 09:09:47 +0000 (0:00:00.556) 0:01:38.028 **** 2025-02-19 09:10:54.317404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.317415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.317432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-19 09:10:54.317443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.317458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.317497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:10:54.317535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-19 09:10:54.317564 | orchestrator | 2025-02-19 09:10:54.317574 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-02-19 09:10:54.317584 | orchestrator | Wednesday 19 February 2025 09:09:50 +0000 (0:00:02.797) 0:01:40.826 **** 2025-02-19 09:10:54.317595 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:10:54.317605 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:10:54.317615 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:10:54.317625 | orchestrator | 2025-02-19 09:10:54.317635 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-02-19 09:10:54.317645 | orchestrator | Wednesday 19 February 2025 09:09:50 +0000 (0:00:00.502) 0:01:41.328 **** 2025-02-19 09:10:54.317655 | orchestrator | changed: [testbed-node-0] 
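For reference, the per-service items that the cinder role iterates over in the tasks above ("Copying over config.json files for services", "Copying over cinder.conf", "Check cinder containers") are dumped as single-line Python dicts. Unpacked, the cinder-api entry for testbed-node-0 has the following shape; this is only a readability sketch with the values copied verbatim from the log output above, and the variable name is illustrative, not taken from the playbook:

# cinder-api service definition as looped over by the kolla-ansible cinder role,
# restructured from the single-line item dumps above (testbed-node-0 values).
# Values are copied from the log; the variable name is illustrative only.
cinder_api_service = {
    "container_name": "cinder_api",
    "group": "cinder-api",
    "enabled": True,
    "image": "registry.osism.tech/kolla/cinder-api:2024.1",
    "volumes": [
        "/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "",  # trailing empty entry present in the dump
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
        "timeout": "30",
    },
    "haproxy": {
        "cinder_api": {
            "enabled": "yes",
            "mode": "http",
            "external": False,
            "port": "8776",
            "listen_port": "8776",
            "tls_backend": "no",
        },
        "cinder_api_external": {
            "enabled": "yes",
            "mode": "http",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8776",
            "listen_port": "8776",
            "tls_backend": "no",
        },
    },
}

Only the healthcheck target differs per node (192.168.16.10/.11/.12 for testbed-node-0/1/2). The cinder-scheduler item follows the same pattern but checks with healthcheck_port on 5672, and the cinder-volume and cinder-backup items additionally run privileged with host /dev, /lib/modules and /run mounts; in this run those two are skipped on testbed-node-0/1/2, as the item results above show.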
2025-02-19 09:10:54.317665 | orchestrator |
2025-02-19 09:10:54.317675 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-02-19 09:10:54.317685 | orchestrator | Wednesday 19 February 2025 09:09:53 +0000 (0:00:02.589) 0:01:43.918 ****
2025-02-19 09:10:54.317695 | orchestrator | changed: [testbed-node-0]
2025-02-19 09:10:54.317705 | orchestrator |
2025-02-19 09:10:54.317716 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-02-19 09:10:54.317738 | orchestrator | Wednesday 19 February 2025 09:09:56 +0000 (0:00:02.771) 0:01:46.689 ****
2025-02-19 09:10:54.317748 | orchestrator | changed: [testbed-node-0]
2025-02-19 09:10:54.317758 | orchestrator |
2025-02-19 09:10:54.317768 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-19 09:10:54.317778 | orchestrator | Wednesday 19 February 2025 09:10:16 +0000 (0:00:20.082) 0:02:06.771 ****
2025-02-19 09:10:54.317788 | orchestrator |
2025-02-19 09:10:54.317798 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-19 09:10:54.317808 | orchestrator | Wednesday 19 February 2025 09:10:16 +0000 (0:00:00.348) 0:02:07.120 ****
2025-02-19 09:10:54.317818 | orchestrator |
2025-02-19 09:10:54.317828 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-02-19 09:10:54.317839 | orchestrator | Wednesday 19 February 2025 09:10:16 +0000 (0:00:00.324) 0:02:07.445 ****
2025-02-19 09:10:54.317848 | orchestrator |
2025-02-19 09:10:54.317859 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-02-19 09:10:54.317869 | orchestrator | Wednesday 19 February 2025 09:10:17 +0000 (0:00:00.276) 0:02:07.722 ****
2025-02-19 09:10:54.317879 | orchestrator | changed: [testbed-node-0]
2025-02-19 09:10:54.317889 | orchestrator | changed: [testbed-node-2]
2025-02-19 09:10:54.317899 | orchestrator | changed: [testbed-node-1]
2025-02-19 09:10:54.317909 | orchestrator |
2025-02-19 09:10:54.317919 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-02-19 09:10:54.317929 | orchestrator | Wednesday 19 February 2025 09:10:34 +0000 (0:00:17.239) 0:02:24.961 ****
2025-02-19 09:10:54.317938 | orchestrator | changed: [testbed-node-0]
2025-02-19 09:10:54.317948 | orchestrator | changed: [testbed-node-1]
2025-02-19 09:10:54.317958 | orchestrator | changed: [testbed-node-2]
2025-02-19 09:10:54.317968 | orchestrator |
2025-02-19 09:10:54.317978 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-02-19 09:10:54.317993 | orchestrator | Wednesday 19 February 2025 09:10:51 +0000 (0:00:16.784) 0:02:41.746 ****
2025-02-19 09:10:54.318003 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:10:54.318059 | orchestrator |
2025-02-19 09:10:54.318072 | orchestrator | PLAY RECAP *********************************************************************
2025-02-19 09:10:54.318083 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-02-19 09:10:54.318094 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-02-19 09:10:54.318104 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-02-19 09:10:54.318121 | orchestrator | testbed-node-3 : ok=7  changed=3  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-19 09:10:54.318131 | orchestrator | testbed-node-4 : ok=7  changed=3  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-19 09:10:54.318147 | orchestrator | testbed-node-5 : ok=7  changed=3  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-19 09:10:57.362689 | orchestrator |
2025-02-19 09:10:57.362802 | orchestrator |
2025-02-19 09:10:57.362818 | orchestrator | TASKS RECAP ********************************************************************
2025-02-19 09:10:57.362832 | orchestrator | Wednesday 19 February 2025 09:10:52 +0000 (0:00:01.159) 0:02:42.905 ****
2025-02-19 09:10:57.362844 | orchestrator | ===============================================================================
2025-02-19 09:10:57.362856 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.08s
2025-02-19 09:10:57.362868 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.24s
2025-02-19 09:10:57.362896 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 16.78s
2025-02-19 09:10:57.362908 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.92s
2025-02-19 09:10:57.362921 | orchestrator | service-ks-register : cinder | Granting user roles --------------------- 10.04s
2025-02-19 09:10:57.362933 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 9.71s
2025-02-19 09:10:57.362944 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 8.10s
2025-02-19 09:10:57.362956 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 7.84s
2025-02-19 09:10:57.362968 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.69s
2025-02-19 09:10:57.362980 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 4.36s
2025-02-19 09:10:57.362991 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.28s
2025-02-19 09:10:57.363003 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.90s
2025-02-19 09:10:57.363015 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.82s
2025-02-19 09:10:57.363027 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.69s
2025-02-19 09:10:57.363039 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.52s
2025-02-19 09:10:57.363051 | orchestrator | cinder : include_tasks -------------------------------------------------- 3.49s
2025-02-19 09:10:57.363062 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.49s
2025-02-19 09:10:57.363074 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.22s
2025-02-19 09:10:57.363086 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.80s
2025-02-19 09:10:57.363098 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.77s
2025-02-19 09:10:57.363238 | orchestrator | 2025-02-19 09:10:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED
2025-02-19 09:10:57.363310 | orchestrator | 2025-02-19 09:10:57 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19
2025-02-19 09:10:57.363238 | orchestrator | 2025-02-19 09:10:57 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED
2025-02-19 09:10:57.363310 | orchestrator | 2025-02-19 09:10:57 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED
2025-02-19 09:10:57.364119 | orchestrator | 2025-02-19 09:10:57 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED
2025-02-19 09:10:57.364986 | orchestrator | 2025-02-19 09:10:57 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED
2025-02-19 09:10:57.365860 | orchestrator | 2025-02-19 09:10:57 | INFO  | Task 261e29ac-14a8-4902-8b06-e5cf19723f29 is in state STARTED
[... several dozen near-identical polling rounds elided: between 09:10:57 and 09:14:19 the five tasks above were reported "is in state STARTED" roughly every three seconds, each round followed by "Wait 1 second(s) until the next check" ...]
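One of the polled tasks is about to finish: below, task 261e29ac-14a8-4902-8b06-e5cf19723f29 flips to SUCCESS and the log switches to the buffered Ansible output of that run, which applies the barbican role. Its service-ks-register tasks create the key-manager service, its internal and public endpoints, the barbican service user, several roles, and the admin role grant in Keystone. Roughly the same sequence expressed with the openstacksdk, for illustration only and not the role's actual implementation (the cloud name and password are placeholders):

```python
import openstack

# clouds.yaml-based connection; the cloud name is a placeholder.
conn = openstack.connect(cloud="testbed")

# Service and endpoints, values taken from the log output below.
service = conn.identity.create_service(name="barbican", type="key-manager")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9311"),
    ("public", "https://api.testbed.osism.xyz:9311"),
]:
    conn.identity.create_endpoint(service_id=service.id,
                                  interface=interface, url=url)

# Service project, barbican service user, and the admin role grant.
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="barbican",
                                 password="CHANGE_ME",  # placeholder
                                 default_project_id=project.id)
admin = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin)

# Additional roles created by the run (admin already existed).
for name in ("key-manager:service-admin", "creator", "observer", "audit"):
    conn.identity.create_role(name=name)
```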
2025-02-19 09:14:22.577867 | orchestrator | 2025-02-19 09:14:22 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED
2025-02-19 09:14:22.579600 | orchestrator | 2025-02-19 09:14:22 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED
2025-02-19 09:14:22.580052 | orchestrator | 2025-02-19 09:14:22 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED
2025-02-19 09:14:22.580088 | orchestrator | 2025-02-19 09:14:22 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED
2025-02-19 09:14:22.580641 | orchestrator | 2025-02-19 09:14:22 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED
2025-02-19 09:14:22.581507 | orchestrator |
2025-02-19 09:14:22.583050 | orchestrator | 2025-02-19 09:14:22 | INFO  | Task 261e29ac-14a8-4902-8b06-e5cf19723f29 is in state SUCCESS
2025-02-19 09:14:22.583101 | orchestrator |
2025-02-19 09:14:22.583117 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-19 09:14:22.583132 | orchestrator |
2025-02-19 09:14:22.583147 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-19 09:14:22.583161 | orchestrator | Wednesday 19 February 2025 09:11:02 +0000 (0:00:00.310) 0:00:00.310 ****
2025-02-19 09:14:22.583175 | orchestrator | ok: [testbed-node-0]
2025-02-19 09:14:22.583191 | orchestrator | ok: [testbed-node-1]
2025-02-19 09:14:22.583205 | orchestrator | ok: [testbed-node-2]
2025-02-19 09:14:22.583219 | orchestrator |
2025-02-19 09:14:22.583233 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-02-19 09:14:22.583247 | orchestrator | Wednesday 19 February 2025 09:11:02 +0000 (0:00:00.381) 0:00:00.692 ****
2025-02-19 09:14:22.583261 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-02-19 09:14:22.583275 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-02-19 09:14:22.583289 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-02-19 09:14:22.583336 | orchestrator |
2025-02-19 09:14:22.583351 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-02-19 09:14:22.583365 | orchestrator |
2025-02-19 09:14:22.583379 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-02-19 09:14:22.583392 | orchestrator | Wednesday 19 February 2025 09:11:03 +0000 (0:00:00.392) 0:00:01.084 ****
2025-02-19 09:14:22.583406 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-02-19 09:14:22.583422 | orchestrator |
2025-02-19 09:14:22.583436 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-02-19 09:14:22.583450 | orchestrator | Wednesday 19 February 2025 09:11:05 +0000 (0:00:01.721) 0:00:02.805 ****
2025-02-19 09:14:22.583547 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-02-19 09:14:22.583562 | orchestrator |
2025-02-19 09:14:22.583576 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-02-19 09:14:22.583590 | orchestrator | Wednesday 19 February 2025 09:11:11 +0000 (0:00:06.011) 0:00:08.816 ****
2025-02-19 09:14:22.583608 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-02-19 09:14:22.583631 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-02-19 09:14:22.583655 | orchestrator |
2025-02-19 09:14:22.583678 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-02-19 09:14:22.583724 | orchestrator | Wednesday 19 February 2025 09:11:17 +0000 (0:00:06.686) 0:00:15.503 ****
2025-02-19 09:14:22.583752 | orchestrator | ok: [testbed-node-0]
=> (item=service) 2025-02-19 09:14:22.583777 | orchestrator | 2025-02-19 09:14:22.583803 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-02-19 09:14:22.583829 | orchestrator | Wednesday 19 February 2025 09:11:21 +0000 (0:00:03.918) 0:00:19.422 **** 2025-02-19 09:14:22.583854 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:14:22.583887 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-02-19 09:14:22.583903 | orchestrator | 2025-02-19 09:14:22.583919 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-02-19 09:14:22.583934 | orchestrator | Wednesday 19 February 2025 09:11:26 +0000 (0:00:04.899) 0:00:24.321 **** 2025-02-19 09:14:22.583950 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:14:22.583975 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-02-19 09:14:22.583991 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-02-19 09:14:22.584007 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-02-19 09:14:22.584023 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-02-19 09:14:22.584038 | orchestrator | 2025-02-19 09:14:22.584052 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-02-19 09:14:22.584066 | orchestrator | Wednesday 19 February 2025 09:11:45 +0000 (0:00:18.446) 0:00:42.768 **** 2025-02-19 09:14:22.584080 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-02-19 09:14:22.584093 | orchestrator | 2025-02-19 09:14:22.584107 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-02-19 09:14:22.584121 | orchestrator | Wednesday 19 February 2025 09:11:50 +0000 (0:00:05.895) 0:00:48.663 **** 2025-02-19 09:14:22.584139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.584174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.584207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.584232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584271 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584391 | orchestrator | 2025-02-19 09:14:22.584406 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-02-19 09:14:22.584420 | orchestrator | Wednesday 19 February 2025 09:11:54 +0000 (0:00:03.181) 0:00:51.845 **** 2025-02-19 09:14:22.584434 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-02-19 09:14:22.584447 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-02-19 09:14:22.584461 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-02-19 09:14:22.584483 | orchestrator | 2025-02-19 09:14:22.584498 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-02-19 09:14:22.584511 | orchestrator | Wednesday 19 February 2025 09:11:56 +0000 (0:00:02.675) 0:00:54.521 **** 2025-02-19 09:14:22.584525 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:14:22.584539 | orchestrator | 2025-02-19 09:14:22.584552 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-02-19 09:14:22.584566 | orchestrator | Wednesday 19 February 2025 09:11:57 +0000 (0:00:00.400) 0:00:54.921 **** 2025-02-19 09:14:22.584579 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:14:22.584593 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:14:22.584607 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:14:22.584621 | orchestrator | 2025-02-19 09:14:22.584635 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-19 
09:14:22.584649 | orchestrator | Wednesday 19 February 2025 09:11:59 +0000 (0:00:02.107) 0:00:57.029 **** 2025-02-19 09:14:22.584663 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:14:22.584677 | orchestrator | 2025-02-19 09:14:22.584691 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-02-19 09:14:22.584705 | orchestrator | Wednesday 19 February 2025 09:12:00 +0000 (0:00:01.304) 0:00:58.334 **** 2025-02-19 09:14:22.584720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.584745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.584768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.584782 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.584875 | orchestrator | 2025-02-19 09:14:22.584887 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-02-19 09:14:22.584900 | orchestrator | Wednesday 19 February 2025 09:12:09 +0000 (0:00:08.456) 0:01:06.790 **** 2025-02-19 09:14:22.584913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.584926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.584938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.584951 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:14:22.584971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.584991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585016 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:14:22.585029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.585042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585083 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:14:22.585095 | orchestrator | 2025-02-19 09:14:22.585108 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-02-19 09:14:22.585120 | orchestrator | Wednesday 19 February 2025 09:12:11 +0000 (0:00:02.402) 0:01:09.197 **** 2025-02-19 09:14:22.585133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.585146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585171 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:14:22.585184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.585206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585514 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:14:22.585528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.585541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.585567 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:14:22.585579 | orchestrator | 2025-02-19 09:14:22.585592 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-02-19 09:14:22.585605 | orchestrator | Wednesday 19 February 2025 09:12:14 +0000 (0:00:03.028) 0:01:12.225 **** 2025-02-19 09:14:22.585624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.585646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.585660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.585673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.585686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.585712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.585726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.585739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.585752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.585765 | orchestrator | 2025-02-19 09:14:22.585778 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-02-19 09:14:22.585790 | orchestrator | Wednesday 19 February 2025 09:12:21 +0000 (0:00:07.340) 0:01:19.566 **** 2025-02-19 09:14:22.585802 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:14:22.585815 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:14:22.585827 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:14:22.585839 | orchestrator | 2025-02-19 09:14:22.585851 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-02-19 09:14:22.585863 | orchestrator | Wednesday 19 February 2025 09:12:28 +0000 (0:00:06.357) 0:01:25.923 **** 2025-02-19 09:14:22.585875 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:14:22.585888 | orchestrator | 2025-02-19 09:14:22.585900 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-02-19 09:14:22.585912 | orchestrator | Wednesday 19 February 2025 09:12:34 +0000 (0:00:06.672) 0:01:32.596 **** 2025-02-19 09:14:22.585924 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:14:22.585936 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:14:22.585948 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:14:22.585981 | orchestrator | 2025-02-19 09:14:22.585993 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-02-19 09:14:22.586010 | orchestrator | Wednesday 19 February 2025 09:12:37 +0000 (0:00:02.619) 0:01:35.216 **** 2025-02-19 09:14:22.586072 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.586094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.586110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.586125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586224 | orchestrator | 2025-02-19 
09:14:22.586238 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-02-19 09:14:22.586253 | orchestrator | Wednesday 19 February 2025 09:13:03 +0000 (0:00:25.901) 0:02:01.117 **** 2025-02-19 09:14:22.586271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.586325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.586348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.586370 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:14:22.586402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.586427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.586450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.586480 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:14:22.586502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-19 09:14:22.586523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.586562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:14:22.586583 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:14:22.586604 | orchestrator | 2025-02-19 09:14:22.586626 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-02-19 09:14:22.586647 | orchestrator | Wednesday 19 February 2025 09:13:05 +0000 (0:00:02.018) 0:02:03.135 **** 2025-02-19 09:14:22.586668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.586690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.586725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-19 09:14:22.586756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:14:22.586842 | orchestrator | 2025-02-19 09:14:22.586854 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-19 09:14:22.586867 | orchestrator | Wednesday 19 February 2025 09:13:11 +0000 (0:00:06.239) 0:02:09.375 **** 2025-02-19 09:14:22.586880 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:14:22.586892 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:14:22.586905 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:14:22.586917 | orchestrator | 2025-02-19 09:14:22.586929 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-02-19 09:14:22.586941 | orchestrator | Wednesday 19 February 2025 09:13:12 +0000 (0:00:01.021) 0:02:10.397 **** 2025-02-19 09:14:22.586954 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:14:22.586966 | orchestrator | 2025-02-19 09:14:22.586979 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-02-19 09:14:22.586991 | orchestrator | Wednesday 19 February 2025 09:13:15 +0000 (0:00:03.113) 0:02:13.511 **** 2025-02-19 09:14:22.587003 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:14:22.587015 | orchestrator | 2025-02-19 09:14:22.587033 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-02-19 09:14:25.619885 | orchestrator | Wednesday 19 February 2025 09:13:19 +0000 (0:00:04.020) 0:02:17.531 **** 2025-02-19 09:14:25.620008 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:14:25.620031 | orchestrator | 2025-02-19 09:14:25.620047 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-19 09:14:25.620063 | orchestrator | Wednesday 19 February 2025 09:13:33 +0000 (0:00:13.449) 0:02:30.980 **** 2025-02-19 09:14:25.620078 | orchestrator | 2025-02-19 09:14:25.620093 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-19 09:14:25.620108 | orchestrator | Wednesday 19 February 2025 09:13:33 +0000 (0:00:00.378) 0:02:31.359 **** 2025-02-19 09:14:25.620123 | orchestrator | 2025-02-19 09:14:25.620138 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-02-19 09:14:25.620153 | orchestrator | Wednesday 19 February 2025 09:13:34 +0000 (0:00:00.426) 0:02:31.785 **** 2025-02-19 09:14:25.620168 | orchestrator | 2025-02-19 09:14:25.620183 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-02-19 09:14:25.620198 | orchestrator | Wednesday 19 February 2025 09:13:35 +0000 (0:00:01.357) 0:02:33.143 **** 2025-02-19 09:14:25.620212 | orchestrator | changed: [testbed-node-0] 2025-02-19 
09:14:25.620227 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:14:25.620271 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:14:25.620287 | orchestrator | 2025-02-19 09:14:25.620344 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-02-19 09:14:25.620415 | orchestrator | Wednesday 19 February 2025 09:13:52 +0000 (0:00:16.818) 0:02:49.962 **** 2025-02-19 09:14:25.620435 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:14:25.620451 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:14:25.620466 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:14:25.620482 | orchestrator | 2025-02-19 09:14:25.620498 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-02-19 09:14:25.620514 | orchestrator | Wednesday 19 February 2025 09:14:08 +0000 (0:00:16.130) 0:03:06.092 **** 2025-02-19 09:14:25.620529 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:14:25.620545 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:14:25.620560 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:14:25.620576 | orchestrator | 2025-02-19 09:14:25.620592 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:14:25.620609 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-19 09:14:25.620626 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 09:14:25.620642 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 09:14:25.620657 | orchestrator | 2025-02-19 09:14:25.620672 | orchestrator | 2025-02-19 09:14:25.620688 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:14:25.620704 | orchestrator | Wednesday 19 February 2025 09:14:18 +0000 (0:00:10.469) 0:03:16.562 **** 2025-02-19 09:14:25.620720 | orchestrator | =============================================================================== 2025-02-19 09:14:25.620735 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 25.90s 2025-02-19 09:14:25.620751 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.45s 2025-02-19 09:14:25.620766 | orchestrator | barbican : Restart barbican-api container ------------------------------ 16.82s 2025-02-19 09:14:25.620796 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 16.13s 2025-02-19 09:14:25.620811 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.45s 2025-02-19 09:14:25.620825 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.47s 2025-02-19 09:14:25.620839 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 8.46s 2025-02-19 09:14:25.620853 | orchestrator | barbican : Copying over config.json files for services ------------------ 7.34s 2025-02-19 09:14:25.620867 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.69s 2025-02-19 09:14:25.620881 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 6.67s 2025-02-19 09:14:25.620895 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 6.36s 2025-02-19 09:14:25.620909 | orchestrator | 
barbican : Check barbican containers ------------------------------------ 6.24s 2025-02-19 09:14:25.620923 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 6.01s 2025-02-19 09:14:25.620936 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.90s 2025-02-19 09:14:25.620950 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.90s 2025-02-19 09:14:25.620964 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 4.02s 2025-02-19 09:14:25.620978 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.92s 2025-02-19 09:14:25.620992 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.18s 2025-02-19 09:14:25.621006 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.11s 2025-02-19 09:14:25.621029 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 3.03s 2025-02-19 09:14:25.621044 | orchestrator | 2025-02-19 09:14:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:25.621075 | orchestrator | 2025-02-19 09:14:25 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:25.621175 | orchestrator | 2025-02-19 09:14:25 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:25.621850 | orchestrator | 2025-02-19 09:14:25 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:25.623478 | orchestrator | 2025-02-19 09:14:25 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:25.625196 | orchestrator | 2025-02-19 09:14:25 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:28.670534 | orchestrator | 2025-02-19 09:14:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:28.670674 | orchestrator | 2025-02-19 09:14:28 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:28.671580 | orchestrator | 2025-02-19 09:14:28 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:28.677374 | orchestrator | 2025-02-19 09:14:28 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:28.685676 | orchestrator | 2025-02-19 09:14:28 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:28.688402 | orchestrator | 2025-02-19 09:14:28 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:31.756963 | orchestrator | 2025-02-19 09:14:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:31.757107 | orchestrator | 2025-02-19 09:14:31 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:31.759245 | orchestrator | 2025-02-19 09:14:31 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:31.759823 | orchestrator | 2025-02-19 09:14:31 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:31.760841 | orchestrator | 2025-02-19 09:14:31 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:31.766998 | orchestrator | 2025-02-19 09:14:31 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:34.795626 | orchestrator | 2025-02-19 09:14:31 | INFO  | Wait 1 second(s) until the next 
check 2025-02-19 09:14:34.795769 | orchestrator | 2025-02-19 09:14:34 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:34.796394 | orchestrator | 2025-02-19 09:14:34 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:34.796851 | orchestrator | 2025-02-19 09:14:34 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:34.799842 | orchestrator | 2025-02-19 09:14:34 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:34.800101 | orchestrator | 2025-02-19 09:14:34 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:34.800313 | orchestrator | 2025-02-19 09:14:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:37.856898 | orchestrator | 2025-02-19 09:14:37 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:37.857370 | orchestrator | 2025-02-19 09:14:37 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:37.858426 | orchestrator | 2025-02-19 09:14:37 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:37.859335 | orchestrator | 2025-02-19 09:14:37 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:37.861847 | orchestrator | 2025-02-19 09:14:37 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:40.916037 | orchestrator | 2025-02-19 09:14:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:40.916211 | orchestrator | 2025-02-19 09:14:40 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:40.916945 | orchestrator | 2025-02-19 09:14:40 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:40.918356 | orchestrator | 2025-02-19 09:14:40 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:40.919840 | orchestrator | 2025-02-19 09:14:40 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:40.921599 | orchestrator | 2025-02-19 09:14:40 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:43.976062 | orchestrator | 2025-02-19 09:14:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:43.976202 | orchestrator | 2025-02-19 09:14:43 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:43.977961 | orchestrator | 2025-02-19 09:14:43 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:43.978838 | orchestrator | 2025-02-19 09:14:43 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:43.980520 | orchestrator | 2025-02-19 09:14:43 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:43.981469 | orchestrator | 2025-02-19 09:14:43 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:47.025766 | orchestrator | 2025-02-19 09:14:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:47.025928 | orchestrator | 2025-02-19 09:14:47 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:47.027799 | orchestrator | 2025-02-19 09:14:47 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:47.030907 | orchestrator | 2025-02-19 09:14:47 | INFO  | Task 
9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:47.034184 | orchestrator | 2025-02-19 09:14:47 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:47.034225 | orchestrator | 2025-02-19 09:14:47 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:50.088497 | orchestrator | 2025-02-19 09:14:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:50.088619 | orchestrator | 2025-02-19 09:14:50 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:50.092859 | orchestrator | 2025-02-19 09:14:50 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:50.094286 | orchestrator | 2025-02-19 09:14:50 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:50.095893 | orchestrator | 2025-02-19 09:14:50 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:50.097512 | orchestrator | 2025-02-19 09:14:50 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:50.097592 | orchestrator | 2025-02-19 09:14:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:53.148072 | orchestrator | 2025-02-19 09:14:53 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:53.150211 | orchestrator | 2025-02-19 09:14:53 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:53.153677 | orchestrator | 2025-02-19 09:14:53 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:53.156036 | orchestrator | 2025-02-19 09:14:53 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:53.158224 | orchestrator | 2025-02-19 09:14:53 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:56.209742 | orchestrator | 2025-02-19 09:14:53 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:56.209886 | orchestrator | 2025-02-19 09:14:56 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:56.210231 | orchestrator | 2025-02-19 09:14:56 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:56.211595 | orchestrator | 2025-02-19 09:14:56 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:56.212911 | orchestrator | 2025-02-19 09:14:56 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:56.214155 | orchestrator | 2025-02-19 09:14:56 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:14:59.266080 | orchestrator | 2025-02-19 09:14:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:14:59.266223 | orchestrator | 2025-02-19 09:14:59 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:14:59.266573 | orchestrator | 2025-02-19 09:14:59 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:14:59.271921 | orchestrator | 2025-02-19 09:14:59 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:14:59.273418 | orchestrator | 2025-02-19 09:14:59 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:14:59.274817 | orchestrator | 2025-02-19 09:14:59 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:15:02.313442 | orchestrator | 2025-02-19 
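The interleaved `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines are the manager-side poller waiting for the outstanding tasks to finish before their buffered Ansible output is flushed to the console. A minimal sketch of such a poll loop follows; `fetch_task_state()` is a hypothetical stand-in, since the log does not show how the state lookup is actually performed:

```python
# Minimal polling sketch mirroring the log output above.
# fetch_task_state() is a hypothetical stand-in for the real task-state
# lookup (e.g. a Celery result backend); it is not shown in the log.

import time
from datetime import datetime

FINISHED_STATES = {"SUCCESS", "FAILURE"}


def fetch_task_state(task_id: str) -> str:
    raise NotImplementedError("stand-in for the real task-state lookup")


def log(message: str) -> None:
    print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | {message}")


def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    """Poll every `interval` seconds until all tasks reach a finished state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):        # sorted() copies, so the set
            state = fetch_task_state(task_id)  # can be shrunk while iterating
            log(f"Task {task_id} is in state {state}")
            if state in FINISHED_STATES:
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the log, task 4d49c698-7509-4061-9298-6ff0d14d3b36 reaches SUCCESS at 09:15:05, and the buffered Ansible output for the prometheus play follows immediately afterwards.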
09:14:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:02.313583 | orchestrator | 2025-02-19 09:15:02 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:02.313856 | orchestrator | 2025-02-19 09:15:02 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:02.314934 | orchestrator | 2025-02-19 09:15:02 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:02.315624 | orchestrator | 2025-02-19 09:15:02 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:02.316447 | orchestrator | 2025-02-19 09:15:02 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state STARTED 2025-02-19 09:15:05.351332 | orchestrator | 2025-02-19 09:15:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:05.351450 | orchestrator | 2025-02-19 09:15:05 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:05.354866 | orchestrator | 2025-02-19 09:15:05 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:05.355731 | orchestrator | 2025-02-19 09:15:05 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:05.356180 | orchestrator | 2025-02-19 09:15:05 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:05.359016 | orchestrator | 2025-02-19 09:15:05 | INFO  | Task 4d49c698-7509-4061-9298-6ff0d14d3b36 is in state SUCCESS 2025-02-19 09:15:05.360896 | orchestrator | 2025-02-19 09:15:05.361034 | orchestrator | 2025-02-19 09:15:05.361061 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:15:05.361082 | orchestrator | 2025-02-19 09:15:05.361101 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:15:05.361122 | orchestrator | Wednesday 19 February 2025 09:07:53 +0000 (0:00:00.436) 0:00:00.436 **** 2025-02-19 09:15:05.361141 | orchestrator | ok: [testbed-manager] 2025-02-19 09:15:05.361161 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:05.361179 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:05.361199 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:05.361218 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:15:05.361237 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:15:05.361255 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:15:05.361275 | orchestrator | 2025-02-19 09:15:05.361295 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:15:05.361343 | orchestrator | Wednesday 19 February 2025 09:07:55 +0000 (0:00:01.832) 0:00:02.269 **** 2025-02-19 09:15:05.361363 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-02-19 09:15:05.361381 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-02-19 09:15:05.361402 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-02-19 09:15:05.361446 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-02-19 09:15:05.361486 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-02-19 09:15:05.361504 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-02-19 09:15:05.361585 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-02-19 09:15:05.361603 | orchestrator | 2025-02-19 09:15:05.361620 | orchestrator | PLAY 
[Apply role prometheus] *************************************************** 2025-02-19 09:15:05.361636 | orchestrator | 2025-02-19 09:15:05.361652 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-02-19 09:15:05.361669 | orchestrator | Wednesday 19 February 2025 09:07:56 +0000 (0:00:01.221) 0:00:03.490 **** 2025-02-19 09:15:05.361685 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:15:05.361718 | orchestrator | 2025-02-19 09:15:05.361750 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-02-19 09:15:05.361767 | orchestrator | Wednesday 19 February 2025 09:07:58 +0000 (0:00:02.306) 0:00:05.797 **** 2025-02-19 09:15:05.361786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.361809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.361845 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-19 09:15:05.361881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.361899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.361917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.361932 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.361948 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.361966 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.361992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.362064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.362086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.362139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.362155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.362180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.362204 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.362222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.362239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.362256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.362276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.362326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.362383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.362398 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.362414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.362430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362475 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-19 09:15:05.362492 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.362507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.362538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.362577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.362592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.362634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.362650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.362673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.362716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.362733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.362750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.362768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.362793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.362808 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.362831 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.362847 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.362982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363009 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.363106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.363123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.363195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.363235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.363286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.363296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.363468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.363478 | orchestrator | 2025-02-19 09:15:05.363489 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-02-19 09:15:05.363500 | orchestrator | Wednesday 19 February 2025 09:08:04 +0000 (0:00:05.365) 0:00:11.163 **** 2025-02-19 09:15:05.363511 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:15:05.363522 | orchestrator | 2025-02-19 09:15:05.363532 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-02-19 09:15:05.363547 | orchestrator | Wednesday 19 February 2025 09:08:06 +0000 (0:00:02.305) 0:00:13.469 **** 2025-02-19 09:15:05.363558 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-19 09:15:05.363574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.363584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.363593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.363609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.363618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.363627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.363640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363649 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.363663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363730 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.363780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.363797 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-19 09:15:05.363812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.364021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.364035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.364044 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.364053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.364062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.364082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.364091 | orchestrator | 2025-02-19 09:15:05.364105 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-02-19 09:15:05.364118 | orchestrator | Wednesday 19 February 2025 09:08:14 +0000 (0:00:07.949) 0:00:21.418 **** 2025-02-19 09:15:05.364127 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.364142 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364152 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364161 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.364180 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364190 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.364199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364281 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.364296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364347 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.364356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2025-02-19 09:15:05.364365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364400 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.364409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364453 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.364462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364488 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.364533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364573 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.364582 | orchestrator | 2025-02-19 09:15:05.364590 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-02-19 09:15:05.364599 | orchestrator | Wednesday 19 February 2025 09:08:16 +0000 (0:00:02.604) 0:00:24.022 **** 2025-02-19 09:15:05.364621 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.364631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364649 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364720 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364785 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.364798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364815 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.364825 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364848 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.364857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364901 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.364912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.364957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.364972 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.364981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.364990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.365003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.365167 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.365180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.365199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.365208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.365218 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.365226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-02-19 09:15:05.365245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.365254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.365263 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.365272 | orchestrator | 2025-02-19 09:15:05.365280 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-02-19 09:15:05.365289 | orchestrator | Wednesday 19 February 2025 09:08:20 +0000 (0:00:03.650) 0:00:27.673 **** 2025-02-19 09:15:05.365325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.365358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.365369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.365378 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-19 09:15:05.365396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.365433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.365449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.365467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.365477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.365491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.365500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.365509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365580 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.365590 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365608 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.365631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.365658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.365672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.365681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.365766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.365784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.365799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.365810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365830 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.365845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.365864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.365875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.365887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.365920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.365936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.365946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.365959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.365969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.365978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.366001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.366010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.366063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.366072 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-19 09:15:05.366087 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.366096 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.366145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.366193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.366213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.366223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.366238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:05.367251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.367396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.367434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.367460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.367485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value':
{'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.367528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.367589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.367618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.367643 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.367666 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.367690 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.367726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.367765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.367792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.367835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.367862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.367887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.367912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.367937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.367972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.368010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.368054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.368079 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.368103 | orchestrator | 2025-02-19 09:15:05.368127 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-02-19 09:15:05.368149 | orchestrator | Wednesday 19 February 2025 09:08:28 +0000 (0:00:07.587) 0:00:35.260 **** 2025-02-19 09:15:05.368172 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 09:15:05.368197 | orchestrator | 2025-02-19 09:15:05.368221 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-02-19 09:15:05.368244 | orchestrator | Wednesday 19 February 2025 09:08:29 +0000 (0:00:01.043) 0:00:36.304 **** 2025-02-19 09:15:05.368268 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368296 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368354 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368394 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368410 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368424 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368456 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324266, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368471 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324266, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368486 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324266, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368501 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324266, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 
1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368529 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324266, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368544 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.368559 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368584 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324266, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368600 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368624 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368649 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368693 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368720 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339779, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368744 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368776 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339779, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368792 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339779, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368807 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339779, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368842 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339779, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368886 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1324071, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368912 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1324071, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.368953 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1324266, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.368981 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339779, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369008 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1324071, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369028 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339782, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369052 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1324071, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369075 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1324071, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369102 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339782, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369118 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339782, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369132 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1324071, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369147 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339782, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369161 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339782, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369183 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1324045, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369216 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1324045, 'dev': 163, 'nlink': 1, 'atime': 
1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369232 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1324045, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369247 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1324045, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369261 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339782, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369276 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1324270, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369291 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1324045, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369381 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1339775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.369434 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1324270, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369463 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1324270, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369478 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369494 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1324270, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369508 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1324045, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369531 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369552 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1324270, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369572 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369594 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1324270, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369609 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329132, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369624 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329132, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369638 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369662 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369688 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329132, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369704 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369727 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324279, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369744 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329132, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369769 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324279, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369793 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1339779, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.369828 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329132, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369869 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329132, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369894 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324279, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369931 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 
1339781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369959 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.369986 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324279, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370055 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324279, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370090 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324279, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370107 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370122 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1323989, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370159 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370175 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1323989, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370190 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370213 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370240 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1323989, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370256 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339768, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370271 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1323989, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1323989, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370372 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339768, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370390 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1323989, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370427 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1324071, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.370443 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339768, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370458 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339768, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370473 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324083, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370496 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324083, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370511 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339768, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370536 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339768, 'dev': 163, 'nlink': 1, 
'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370559 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324083, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370574 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324083, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370589 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329130, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370605 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329130, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370638 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324083, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370654 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324083, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370687 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329130, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370703 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1339782, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.370718 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329130, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370732 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1323876, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370747 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1323876, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370769 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329130, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370785 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1323876, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370818 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329130, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370833 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1323876, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370848 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1329054, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370863 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.370877 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1329054, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370890 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.370903 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1329054, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370917 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.370940 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1323876, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370970 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1323876, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370984 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1329054, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.370997 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.371010 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1329054, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.371023 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.371036 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1329054, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-02-19 09:15:05.371049 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.371061 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1324045, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371074 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1324270, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371107 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328775, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371138 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1329132, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371151 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1324279, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371164 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339781, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371177 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1323989, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371190 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1339768, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371204 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1324083, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6466274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371238 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1329130, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6486275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371253 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1323876, 'dev': 163, 'nlink': 1, 
'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6456275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371266 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1329054, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6476274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-19 09:15:05.371279 | orchestrator | 2025-02-19 09:15:05.371292 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-02-19 09:15:05.371326 | orchestrator | Wednesday 19 February 2025 09:10:02 +0000 (0:01:32.978) 0:02:09.282 **** 2025-02-19 09:15:05.371341 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 09:15:05.371364 | orchestrator | 2025-02-19 09:15:05.371384 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-02-19 09:15:05.371407 | orchestrator | Wednesday 19 February 2025 09:10:02 +0000 (0:00:00.645) 0:02:09.928 **** 2025-02-19 09:15:05.371428 | orchestrator | [WARNING]: Skipped 2025-02-19 09:15:05.371450 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.371473 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-02-19 09:15:05.371496 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.371518 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-02-19 09:15:05.371541 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 09:15:05.371562 | orchestrator | [WARNING]: Skipped 2025-02-19 09:15:05.371584 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.371607 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-02-19 09:15:05.371631 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.371653 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-02-19 09:15:05.371676 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:15:05.371701 | orchestrator | [WARNING]: Skipped 2025-02-19 09:15:05.371724 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.371747 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-02-19 09:15:05.371770 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.371794 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-02-19 09:15:05.371817 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-19 09:15:05.371842 | orchestrator | [WARNING]: Skipped 2025-02-19 09:15:05.371877 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.371901 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-02-19 09:15:05.371923 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.371945 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-02-19 09:15:05.371967 | orchestrator | [WARNING]: Skipped 2025-02-19 09:15:05.371988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.372011 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-02-19 09:15:05.372035 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.372059 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-02-19 09:15:05.372082 | orchestrator | [WARNING]: Skipped 2025-02-19 09:15:05.372107 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.372131 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-02-19 09:15:05.372155 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.372188 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-02-19 09:15:05.372214 | orchestrator | [WARNING]: Skipped 2025-02-19 09:15:05.372244 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.372264 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-02-19 09:15:05.372284 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-02-19 09:15:05.372330 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-02-19 09:15:05.372353 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-19 09:15:05.372376 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-19 09:15:05.372401 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-19 09:15:05.372423 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-19 09:15:05.372445 | orchestrator | 2025-02-19 09:15:05.372466 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-02-19 09:15:05.372488 | orchestrator | Wednesday 19 February 2025 09:10:04 +0000 (0:00:02.033) 0:02:11.961 **** 2025-02-19 09:15:05.372509 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-19 09:15:05.372529 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.372549 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-19 09:15:05.372570 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-19 09:15:05.372591 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.372612 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.372642 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-19 09:15:05.372664 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.372684 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-19 09:15:05.372706 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.372728 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-02-19 09:15:05.372750 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.372771 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-02-19 09:15:05.372793 | orchestrator | 2025-02-19 09:15:05.372814 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-02-19 09:15:05.372835 | orchestrator | Wednesday 19 February 2025 09:10:32 +0000 (0:00:27.741) 0:02:39.703 **** 2025-02-19 09:15:05.372856 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-19 09:15:05.372890 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.372913 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-19 09:15:05.372934 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.372956 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-19 09:15:05.372976 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.372998 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-19 09:15:05.373020 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.373040 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-19 09:15:05.373062 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.373083 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-02-19 09:15:05.373104 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.373125 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-02-19 09:15:05.373147 | orchestrator | 2025-02-19 09:15:05.373174 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-02-19 09:15:05.373197 | orchestrator | Wednesday 19 February 2025 09:10:44 +0000 (0:00:12.103) 0:02:51.806 **** 2025-02-19 09:15:05.373219 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-19 09:15:05.373241 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.373262 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-19 09:15:05.373283 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.373324 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-19 09:15:05.373352 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.373374 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-19 09:15:05.373397 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.373421 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-19 09:15:05.373445 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.373468 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-02-19 09:15:05.373492 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.373526 | orchestrator | 
changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-02-19 09:15:05.373552 | orchestrator | 2025-02-19 09:15:05.373574 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-02-19 09:15:05.373595 | orchestrator | Wednesday 19 February 2025 09:10:52 +0000 (0:00:08.140) 0:02:59.947 **** 2025-02-19 09:15:05.373619 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 09:15:05.373642 | orchestrator | 2025-02-19 09:15:05.373665 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-02-19 09:15:05.373691 | orchestrator | Wednesday 19 February 2025 09:10:54 +0000 (0:00:01.449) 0:03:01.397 **** 2025-02-19 09:15:05.373713 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.373736 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.373760 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.373785 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.373807 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.373829 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.373852 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.373889 | orchestrator | 2025-02-19 09:15:05.373911 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-02-19 09:15:05.373935 | orchestrator | Wednesday 19 February 2025 09:10:55 +0000 (0:00:01.297) 0:03:02.694 **** 2025-02-19 09:15:05.373957 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.373982 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.374006 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.374083 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.374110 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:05.374134 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:05.374157 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:05.374180 | orchestrator | 2025-02-19 09:15:05.374203 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-02-19 09:15:05.374226 | orchestrator | Wednesday 19 February 2025 09:11:03 +0000 (0:00:07.589) 0:03:10.283 **** 2025-02-19 09:15:05.374250 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-19 09:15:05.374272 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.374294 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-19 09:15:05.374382 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.374405 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-19 09:15:05.374426 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.374446 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-19 09:15:05.374468 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.374489 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-19 09:15:05.374510 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.374532 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-19 09:15:05.374553 | orchestrator | skipping: 
[testbed-node-5] 2025-02-19 09:15:05.374574 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-02-19 09:15:05.374595 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.374616 | orchestrator | 2025-02-19 09:15:05.374637 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-02-19 09:15:05.374658 | orchestrator | Wednesday 19 February 2025 09:11:10 +0000 (0:00:07.571) 0:03:17.855 **** 2025-02-19 09:15:05.374679 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-19 09:15:05.374700 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.374721 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-19 09:15:05.374742 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.374764 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-19 09:15:05.374785 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.374806 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-19 09:15:05.374827 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.374848 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-19 09:15:05.374868 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.374885 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-02-19 09:15:05.374901 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.374918 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-02-19 09:15:05.374934 | orchestrator | 2025-02-19 09:15:05.374951 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-02-19 09:15:05.374978 | orchestrator | Wednesday 19 February 2025 09:11:16 +0000 (0:00:06.151) 0:03:24.006 **** 2025-02-19 09:15:05.374995 | orchestrator | [WARNING]: Skipped 2025-02-19 09:15:05.375013 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-02-19 09:15:05.375030 | orchestrator | due to this access issue: 2025-02-19 09:15:05.375047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-02-19 09:15:05.375064 | orchestrator | not a directory 2025-02-19 09:15:05.375081 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-19 09:15:05.375105 | orchestrator | 2025-02-19 09:15:05.375123 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-02-19 09:15:05.375149 | orchestrator | Wednesday 19 February 2025 09:11:20 +0000 (0:00:03.157) 0:03:27.163 **** 2025-02-19 09:15:05.375166 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.375183 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.375200 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.375217 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.375235 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.375252 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.375269 | 
orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.375285 | orchestrator | 2025-02-19 09:15:05.375321 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-02-19 09:15:05.375341 | orchestrator | Wednesday 19 February 2025 09:11:21 +0000 (0:00:01.710) 0:03:28.873 **** 2025-02-19 09:15:05.375358 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.375375 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.375391 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.375410 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.375427 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.375443 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.375459 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.375477 | orchestrator | 2025-02-19 09:15:05.375494 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-02-19 09:15:05.375511 | orchestrator | Wednesday 19 February 2025 09:11:24 +0000 (0:00:02.863) 0:03:31.736 **** 2025-02-19 09:15:05.375528 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-19 09:15:05.375545 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:05.375562 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-19 09:15:05.375580 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.375596 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-19 09:15:05.375612 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.375628 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-19 09:15:05.375644 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.375660 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-19 09:15:05.375678 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.375695 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-19 09:15:05.375712 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.375729 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-02-19 09:15:05.375746 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.375763 | orchestrator | 2025-02-19 09:15:05.375779 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-02-19 09:15:05.375797 | orchestrator | Wednesday 19 February 2025 09:11:30 +0000 (0:00:05.675) 0:03:37.412 **** 2025-02-19 09:15:05.375815 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-19 09:15:05.375841 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:05.375858 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-19 09:15:05.375876 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:05.375893 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-19 09:15:05.375909 | orchestrator | skipping: [testbed-node-2] 2025-02-19 
09:15:05.375926 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-19 09:15:05.375943 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:05.375965 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-19 09:15:05.375983 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:05.375999 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-19 09:15:05.376017 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:05.376034 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-02-19 09:15:05.376050 | orchestrator | skipping: [testbed-manager] 2025-02-19 09:15:05.376067 | orchestrator | 2025-02-19 09:15:05.376084 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-02-19 09:15:05.376101 | orchestrator | Wednesday 19 February 2025 09:11:36 +0000 (0:00:06.095) 0:03:43.507 **** 2025-02-19 09:15:05.376129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.376149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.376167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}})  2025-02-19 09:15:05.376223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.376243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.376261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.376287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.376326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.376346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376409 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-02-19 09:15:05.376422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.376433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.376462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-02-19 09:15:05.376484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.376499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.376520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.376533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.376551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.376591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.376610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.376622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.376638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.376676 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.376687 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376698 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-02-19 09:15:05.376719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.376764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.376781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': 
False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.376810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.376821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.376842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.376859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.376883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.376894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.376916 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.376926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.376951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.376968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.376978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.376989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.377005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.377026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.377042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-02-19 09:15:05.377053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 
'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.377063 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-02-19 09:15:05.377074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377098 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-02-19 09:15:05.377114 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-02-19 09:15:05.377125 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.377147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.377168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.377208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.377230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.377251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.377278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377299 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-19 09:15:05.377369 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-02-19 09:15:05.377381 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:15:05.377392 | orchestrator | 2025-02-19 09:15:05.377402 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-02-19 09:15:05.377413 | orchestrator | Wednesday 19 February 2025 09:11:46 +0000 (0:00:10.257) 0:03:53.765 **** 2025-02-19 09:15:05.377424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-02-19 09:15:05.377434 | orchestrator | 2025-02-19 09:15:05.377445 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-19 09:15:05.377455 | orchestrator | Wednesday 19 February 2025 09:11:51 +0000 (0:00:04.800) 0:03:58.565 **** 2025-02-19 09:15:05.377465 | orchestrator | 2025-02-19 09:15:05.377473 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-19 09:15:05.377482 | orchestrator | Wednesday 19 February 2025 09:11:52 +0000 (0:00:00.623) 0:03:59.189 **** 2025-02-19 09:15:05.377491 | orchestrator | 2025-02-19 09:15:05.377503 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-19 09:15:05.377512 | orchestrator | Wednesday 19 February 2025 09:11:52 +0000 (0:00:00.177) 0:03:59.367 **** 2025-02-19 09:15:05.377520 | orchestrator | 2025-02-19 09:15:05.377529 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-19 09:15:05.377537 | orchestrator | Wednesday 19 February 2025 09:11:52 +0000 (0:00:00.138) 0:03:59.505 **** 2025-02-19 09:15:05.377546 | orchestrator | 2025-02-19 09:15:05.377555 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-19 09:15:05.377563 | orchestrator | Wednesday 19 February 2025 09:11:52 +0000 (0:00:00.128) 0:03:59.633 **** 2025-02-19 09:15:05.377572 | orchestrator | 2025-02-19 09:15:05.377580 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-19 09:15:05.377589 | orchestrator | Wednesday 19 February 2025 09:11:52 +0000 (0:00:00.347) 0:03:59.981 **** 2025-02-19 09:15:05.377598 | orchestrator | 2025-02-19 09:15:05.377606 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-02-19 09:15:05.377615 | orchestrator | Wednesday 19 February 2025 09:11:52 +0000 
(0:00:00.068) 0:04:00.050 **** 2025-02-19 09:15:05.377623 | orchestrator | 2025-02-19 09:15:05.377632 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-02-19 09:15:05.377640 | orchestrator | Wednesday 19 February 2025 09:11:53 +0000 (0:00:00.106) 0:04:00.156 **** 2025-02-19 09:15:05.377654 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:05.377663 | orchestrator | 2025-02-19 09:15:05.377672 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-02-19 09:15:05.377680 | orchestrator | Wednesday 19 February 2025 09:12:22 +0000 (0:00:29.230) 0:04:29.387 **** 2025-02-19 09:15:05.377689 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:05.377697 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:05.377706 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:05.377715 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:05.377723 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:15:05.377732 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:05.377740 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:05.377749 | orchestrator | 2025-02-19 09:15:05.377757 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-02-19 09:15:05.377766 | orchestrator | Wednesday 19 February 2025 09:12:55 +0000 (0:00:33.404) 0:05:02.792 **** 2025-02-19 09:15:05.377775 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:05.377783 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:05.377792 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:05.377800 | orchestrator | 2025-02-19 09:15:05.377809 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-02-19 09:15:05.377817 | orchestrator | Wednesday 19 February 2025 09:13:13 +0000 (0:00:17.948) 0:05:20.741 **** 2025-02-19 09:15:05.377826 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:05.377834 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:05.377843 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:05.377852 | orchestrator | 2025-02-19 09:15:05.377860 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-02-19 09:15:05.377869 | orchestrator | Wednesday 19 February 2025 09:13:31 +0000 (0:00:18.265) 0:05:39.006 **** 2025-02-19 09:15:05.377877 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:05.377886 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:05.377894 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:05.377903 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:15:05.377915 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:08.414834 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:08.414958 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:08.414977 | orchestrator | 2025-02-19 09:15:08.414994 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-02-19 09:15:08.415009 | orchestrator | Wednesday 19 February 2025 09:14:00 +0000 (0:00:28.306) 0:06:07.312 **** 2025-02-19 09:15:08.415023 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:08.415038 | orchestrator | 2025-02-19 09:15:08.415052 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-02-19 09:15:08.415067 | orchestrator | Wednesday 19 February 2025 09:14:15 +0000 
(0:00:15.183) 0:06:22.496 **** 2025-02-19 09:15:08.415081 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:08.415114 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:08.415128 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:08.415142 | orchestrator | 2025-02-19 09:15:08.415156 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-02-19 09:15:08.415170 | orchestrator | Wednesday 19 February 2025 09:14:34 +0000 (0:00:19.152) 0:06:41.648 **** 2025-02-19 09:15:08.415184 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:08.415198 | orchestrator | 2025-02-19 09:15:08.415212 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-02-19 09:15:08.415225 | orchestrator | Wednesday 19 February 2025 09:14:47 +0000 (0:00:13.021) 0:06:54.670 **** 2025-02-19 09:15:08.415239 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:08.415253 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:08.415267 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:15:08.415281 | orchestrator | 2025-02-19 09:15:08.415296 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:15:08.415379 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-19 09:15:08.415399 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-19 09:15:08.415415 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-19 09:15:08.415431 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-02-19 09:15:08.415447 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-19 09:15:08.415463 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-19 09:15:08.415480 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-02-19 09:15:08.415496 | orchestrator | 2025-02-19 09:15:08.415511 | orchestrator | 2025-02-19 09:15:08.415527 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:15:08.415543 | orchestrator | Wednesday 19 February 2025 09:15:04 +0000 (0:00:16.970) 0:07:11.640 **** 2025-02-19 09:15:08.415559 | orchestrator | =============================================================================== 2025-02-19 09:15:08.415581 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 92.98s 2025-02-19 09:15:08.415598 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 33.40s 2025-02-19 09:15:08.415614 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 29.23s 2025-02-19 09:15:08.415630 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 28.31s 2025-02-19 09:15:08.415646 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 27.74s 2025-02-19 09:15:08.415661 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 19.15s 2025-02-19 09:15:08.415677 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 18.27s 2025-02-19 09:15:08.415693 
| orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 17.95s 2025-02-19 09:15:08.415709 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 16.97s 2025-02-19 09:15:08.415723 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 15.18s 2025-02-19 09:15:08.415736 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 13.02s 2025-02-19 09:15:08.415750 | orchestrator | prometheus : Copying over prometheus web config file ------------------- 12.10s 2025-02-19 09:15:08.415764 | orchestrator | prometheus : Check prometheus containers ------------------------------- 10.26s 2025-02-19 09:15:08.415778 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 8.14s 2025-02-19 09:15:08.415791 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.95s 2025-02-19 09:15:08.415805 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 7.59s 2025-02-19 09:15:08.415819 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.59s 2025-02-19 09:15:08.415832 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 7.57s 2025-02-19 09:15:08.415846 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 6.15s 2025-02-19 09:15:08.415860 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 6.10s 2025-02-19 09:15:08.415891 | orchestrator | 2025-02-19 09:15:08 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:08.416551 | orchestrator | 2025-02-19 09:15:08 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:08.416592 | orchestrator | 2025-02-19 09:15:08 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:08.416614 | orchestrator | 2025-02-19 09:15:08 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:08.418152 | orchestrator | 2025-02-19 09:15:08 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:11.455855 | orchestrator | 2025-02-19 09:15:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:11.456009 | orchestrator | 2025-02-19 09:15:11 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:11.457703 | orchestrator | 2025-02-19 09:15:11 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:11.458236 | orchestrator | 2025-02-19 09:15:11 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:11.458263 | orchestrator | 2025-02-19 09:15:11 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:11.459611 | orchestrator | 2025-02-19 09:15:11 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:14.504771 | orchestrator | 2025-02-19 09:15:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:14.504921 | orchestrator | 2025-02-19 09:15:14 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:14.506245 | orchestrator | 2025-02-19 09:15:14 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:14.507725 | orchestrator | 2025-02-19 09:15:14 | INFO  | Task 
9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:14.509366 | orchestrator | 2025-02-19 09:15:14 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:14.510855 | orchestrator | 2025-02-19 09:15:14 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:14.511111 | orchestrator | 2025-02-19 09:15:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:17.539701 | orchestrator | 2025-02-19 09:15:17 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:17.539963 | orchestrator | 2025-02-19 09:15:17 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:17.540690 | orchestrator | 2025-02-19 09:15:17 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:17.541549 | orchestrator | 2025-02-19 09:15:17 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:17.543607 | orchestrator | 2025-02-19 09:15:17 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:17.543806 | orchestrator | 2025-02-19 09:15:17 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:20.592560 | orchestrator | 2025-02-19 09:15:20 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:20.592741 | orchestrator | 2025-02-19 09:15:20 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:20.594012 | orchestrator | 2025-02-19 09:15:20 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:20.594775 | orchestrator | 2025-02-19 09:15:20 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:20.595483 | orchestrator | 2025-02-19 09:15:20 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:20.595567 | orchestrator | 2025-02-19 09:15:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:23.625530 | orchestrator | 2025-02-19 09:15:23 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:23.626355 | orchestrator | 2025-02-19 09:15:23 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:23.626381 | orchestrator | 2025-02-19 09:15:23 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:23.626393 | orchestrator | 2025-02-19 09:15:23 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:23.628155 | orchestrator | 2025-02-19 09:15:23 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:26.660632 | orchestrator | 2025-02-19 09:15:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:26.660789 | orchestrator | 2025-02-19 09:15:26 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:26.661132 | orchestrator | 2025-02-19 09:15:26 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:26.661896 | orchestrator | 2025-02-19 09:15:26 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:26.665609 | orchestrator | 2025-02-19 09:15:26 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:26.665941 | orchestrator | 2025-02-19 09:15:26 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:29.699540 | orchestrator | 2025-02-19 
09:15:26 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:29.699703 | orchestrator | 2025-02-19 09:15:29 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:29.699907 | orchestrator | 2025-02-19 09:15:29 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:29.699940 | orchestrator | 2025-02-19 09:15:29 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:29.704355 | orchestrator | 2025-02-19 09:15:29 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:29.705020 | orchestrator | 2025-02-19 09:15:29 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:32.740820 | orchestrator | 2025-02-19 09:15:29 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:32.740986 | orchestrator | 2025-02-19 09:15:32 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:32.741401 | orchestrator | 2025-02-19 09:15:32 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:32.742563 | orchestrator | 2025-02-19 09:15:32 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:32.743784 | orchestrator | 2025-02-19 09:15:32 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:32.744685 | orchestrator | 2025-02-19 09:15:32 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:32.745182 | orchestrator | 2025-02-19 09:15:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:35.793447 | orchestrator | 2025-02-19 09:15:35 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:35.796699 | orchestrator | 2025-02-19 09:15:35 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:35.797172 | orchestrator | 2025-02-19 09:15:35 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:35.798007 | orchestrator | 2025-02-19 09:15:35 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:35.798678 | orchestrator | 2025-02-19 09:15:35 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:38.835730 | orchestrator | 2025-02-19 09:15:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:38.835825 | orchestrator | 2025-02-19 09:15:38 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:38.836190 | orchestrator | 2025-02-19 09:15:38 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:38.836205 | orchestrator | 2025-02-19 09:15:38 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state STARTED 2025-02-19 09:15:38.837362 | orchestrator | 2025-02-19 09:15:38 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:38.837940 | orchestrator | 2025-02-19 09:15:38 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:38.838159 | orchestrator | 2025-02-19 09:15:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:41.886086 | orchestrator | 2025-02-19 09:15:41 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:41.888393 | orchestrator | 2025-02-19 09:15:41 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:41.888911 | orchestrator | 2025-02-19 
09:15:41 | INFO  | Task 9d749719-e4e4-4bc8-80e5-0795801cf979 is in state SUCCESS 2025-02-19 09:15:41.893704 | orchestrator | 2025-02-19 09:15:41.893767 | orchestrator | 2025-02-19 09:15:41.893807 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-02-19 09:15:41.893824 | orchestrator | 2025-02-19 09:15:41.893838 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-02-19 09:15:41.893853 | orchestrator | Wednesday 19 February 2025 08:50:26 +0000 (0:00:00.895) 0:00:00.895 **** 2025-02-19 09:15:41.893867 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:15:41.893883 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:15:41.893898 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:15:41.893912 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.893926 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.893940 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.893953 | orchestrator | 2025-02-19 09:15:41.893968 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-02-19 09:15:41.893982 | orchestrator | Wednesday 19 February 2025 08:50:31 +0000 (0:00:04.851) 0:00:05.747 **** 2025-02-19 09:15:41.893996 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.894011 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.894078 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.894094 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.894109 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.894123 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.894138 | orchestrator | 2025-02-19 09:15:41.894153 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-02-19 09:15:41.894168 | orchestrator | Wednesday 19 February 2025 08:50:35 +0000 (0:00:03.659) 0:00:09.406 **** 2025-02-19 09:15:41.894183 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.894198 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.894212 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.894227 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.894241 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.894256 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.894272 | orchestrator | 2025-02-19 09:15:41.894300 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-02-19 09:15:41.894372 | orchestrator | Wednesday 19 February 2025 08:50:37 +0000 (0:00:02.045) 0:00:11.452 **** 2025-02-19 09:15:41.894390 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:41.894407 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:15:41.894449 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:41.894466 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.894483 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.894499 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.894516 | orchestrator | 2025-02-19 09:15:41.894533 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-02-19 09:15:41.894550 | orchestrator | Wednesday 19 February 2025 08:50:39 +0000 (0:00:02.542) 0:00:13.995 **** 2025-02-19 09:15:41.894567 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:41.894583 | orchestrator | changed: [testbed-node-4] 2025-02-19 
09:15:41.894599 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:41.894616 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.894632 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.894647 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.894662 | orchestrator | 2025-02-19 09:15:41.894677 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-02-19 09:15:41.894707 | orchestrator | Wednesday 19 February 2025 08:50:43 +0000 (0:00:04.118) 0:00:18.113 **** 2025-02-19 09:15:41.894722 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:41.894737 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:15:41.894752 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:41.894766 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.894781 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.894796 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.894810 | orchestrator | 2025-02-19 09:15:41.894825 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-02-19 09:15:41.894840 | orchestrator | Wednesday 19 February 2025 08:50:46 +0000 (0:00:02.980) 0:00:21.094 **** 2025-02-19 09:15:41.894855 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.894869 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.894884 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.894898 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.894913 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.894928 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.894943 | orchestrator | 2025-02-19 09:15:41.894958 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-02-19 09:15:41.894973 | orchestrator | Wednesday 19 February 2025 08:50:48 +0000 (0:00:01.995) 0:00:23.089 **** 2025-02-19 09:15:41.894988 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.895002 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.895017 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.895038 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.895053 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.895067 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.895082 | orchestrator | 2025-02-19 09:15:41.895097 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-02-19 09:15:41.895112 | orchestrator | Wednesday 19 February 2025 08:50:50 +0000 (0:00:01.453) 0:00:24.543 **** 2025-02-19 09:15:41.895127 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:15:41.895142 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 09:15:41.895156 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.895171 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:15:41.895186 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 09:15:41.895201 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.895215 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:15:41.895230 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 
09:15:41.895245 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.895260 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:15:41.895296 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 09:15:41.895338 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.895363 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:15:41.895384 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 09:15:41.895404 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.895424 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:15:41.895448 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 09:15:41.895470 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.895490 | orchestrator | 2025-02-19 09:15:41.895505 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-02-19 09:15:41.895519 | orchestrator | Wednesday 19 February 2025 08:50:51 +0000 (0:00:01.422) 0:00:25.966 **** 2025-02-19 09:15:41.895532 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.895546 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.895560 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.895574 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.895588 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.895601 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.895615 | orchestrator | 2025-02-19 09:15:41.895629 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-02-19 09:15:41.895645 | orchestrator | Wednesday 19 February 2025 08:50:54 +0000 (0:00:02.701) 0:00:28.671 **** 2025-02-19 09:15:41.895659 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:15:41.895673 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:15:41.895687 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:15:41.895701 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.895714 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.895728 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.895747 | orchestrator | 2025-02-19 09:15:41.895771 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-02-19 09:15:41.895793 | orchestrator | Wednesday 19 February 2025 08:50:55 +0000 (0:00:01.514) 0:00:30.186 **** 2025-02-19 09:15:41.895815 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.895840 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.895863 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:41.895882 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:41.895896 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.895910 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:15:41.895923 | orchestrator | 2025-02-19 09:15:41.895937 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-02-19 09:15:41.895951 | orchestrator | Wednesday 19 February 2025 08:51:02 +0000 (0:00:06.221) 0:00:36.407 **** 2025-02-19 09:15:41.895965 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.895979 | orchestrator | skipping: [testbed-node-4] 2025-02-19 
09:15:41.895993 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.896006 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.896020 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.896034 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.896048 | orchestrator | 2025-02-19 09:15:41.896062 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-02-19 09:15:41.896076 | orchestrator | Wednesday 19 February 2025 08:51:03 +0000 (0:00:01.555) 0:00:37.962 **** 2025-02-19 09:15:41.896090 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.896103 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.896117 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.896131 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.896144 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.896158 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.896171 | orchestrator | 2025-02-19 09:15:41.896186 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-02-19 09:15:41.896217 | orchestrator | Wednesday 19 February 2025 08:51:06 +0000 (0:00:02.412) 0:00:40.375 **** 2025-02-19 09:15:41.896231 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.896244 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.896258 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.896272 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.896285 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.896299 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.896333 | orchestrator | 2025-02-19 09:15:41.896348 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-02-19 09:15:41.896362 | orchestrator | Wednesday 19 February 2025 08:51:07 +0000 (0:00:00.904) 0:00:41.280 **** 2025-02-19 09:15:41.896376 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-02-19 09:15:41.896396 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-02-19 09:15:41.896410 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.896424 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-02-19 09:15:41.896438 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-02-19 09:15:41.896452 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.896466 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-02-19 09:15:41.896480 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-02-19 09:15:41.896494 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.896508 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-02-19 09:15:41.896521 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-02-19 09:15:41.896535 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.896549 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-02-19 09:15:41.896563 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-02-19 09:15:41.896577 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.896591 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-02-19 09:15:41.896604 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-02-19 09:15:41.896618 | orchestrator | skipping: 
[testbed-node-2] 2025-02-19 09:15:41.896633 | orchestrator | 2025-02-19 09:15:41.896647 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-02-19 09:15:41.896670 | orchestrator | Wednesday 19 February 2025 08:51:08 +0000 (0:00:01.161) 0:00:42.441 **** 2025-02-19 09:15:41.896685 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.896707 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.896722 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.896736 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.896750 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.896764 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.896778 | orchestrator | 2025-02-19 09:15:41.896792 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-02-19 09:15:41.896806 | orchestrator | 2025-02-19 09:15:41.896820 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-02-19 09:15:41.896834 | orchestrator | Wednesday 19 February 2025 08:51:09 +0000 (0:00:01.660) 0:00:44.102 **** 2025-02-19 09:15:41.896847 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.896861 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.896875 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.896889 | orchestrator | 2025-02-19 09:15:41.896904 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-02-19 09:15:41.896918 | orchestrator | Wednesday 19 February 2025 08:51:11 +0000 (0:00:01.467) 0:00:45.569 **** 2025-02-19 09:15:41.896932 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.896946 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.896960 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.896973 | orchestrator | 2025-02-19 09:15:41.896987 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-02-19 09:15:41.897009 | orchestrator | Wednesday 19 February 2025 08:51:12 +0000 (0:00:01.376) 0:00:46.946 **** 2025-02-19 09:15:41.897023 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.897036 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.897050 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.897064 | orchestrator | 2025-02-19 09:15:41.897078 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-02-19 09:15:41.897092 | orchestrator | Wednesday 19 February 2025 08:51:14 +0000 (0:00:01.363) 0:00:48.309 **** 2025-02-19 09:15:41.897106 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.897119 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.897133 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.897147 | orchestrator | 2025-02-19 09:15:41.897161 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-02-19 09:15:41.897174 | orchestrator | Wednesday 19 February 2025 08:51:14 +0000 (0:00:00.899) 0:00:49.209 **** 2025-02-19 09:15:41.897188 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.897202 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.897216 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.897230 | orchestrator | 2025-02-19 09:15:41.897244 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-02-19 09:15:41.897257 | orchestrator | 
Wednesday 19 February 2025 08:51:15 +0000 (0:00:00.370) 0:00:49.580 **** 2025-02-19 09:15:41.897271 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:15:41.897285 | orchestrator | 2025-02-19 09:15:41.897299 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-02-19 09:15:41.897329 | orchestrator | Wednesday 19 February 2025 08:51:16 +0000 (0:00:00.740) 0:00:50.321 **** 2025-02-19 09:15:41.897344 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.897358 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.897371 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.897385 | orchestrator | 2025-02-19 09:15:41.897399 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-02-19 09:15:41.897413 | orchestrator | Wednesday 19 February 2025 08:51:18 +0000 (0:00:02.180) 0:00:52.501 **** 2025-02-19 09:15:41.897427 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.897441 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.897454 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.897468 | orchestrator | 2025-02-19 09:15:41.897482 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-02-19 09:15:41.897495 | orchestrator | Wednesday 19 February 2025 08:51:19 +0000 (0:00:01.316) 0:00:53.817 **** 2025-02-19 09:15:41.897509 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.897523 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.897537 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.897551 | orchestrator | 2025-02-19 09:15:41.897564 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-02-19 09:15:41.897578 | orchestrator | Wednesday 19 February 2025 08:51:20 +0000 (0:00:00.884) 0:00:54.702 **** 2025-02-19 09:15:41.897592 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.897606 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.897620 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.897634 | orchestrator | 2025-02-19 09:15:41.897648 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-02-19 09:15:41.897661 | orchestrator | Wednesday 19 February 2025 08:51:24 +0000 (0:00:03.845) 0:00:58.547 **** 2025-02-19 09:15:41.897675 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.897689 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.897703 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.897716 | orchestrator | 2025-02-19 09:15:41.897730 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-02-19 09:15:41.897744 | orchestrator | Wednesday 19 February 2025 08:51:24 +0000 (0:00:00.649) 0:00:59.197 **** 2025-02-19 09:15:41.897772 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.897787 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.897801 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.897815 | orchestrator | 2025-02-19 09:15:41.897828 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-02-19 09:15:41.897842 | orchestrator | Wednesday 19 February 2025 08:51:25 +0000 (0:00:00.734) 0:00:59.932 **** 2025-02-19 09:15:41.897856 | orchestrator | changed: [testbed-node-0] 
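For orientation on the "Init cluster inside the transient k3s-init service" step above (results for the remaining servers follow below): bootstrapping the first control-plane node inside a throwaway k3s-init unit typically reduces to a systemd-run invocation along the lines of the sketch here. This is an illustrative reconstruction, not the role's actual template; the variable names (k3s_token, apiserver_endpoint) and the restart properties are assumptions.

  - name: Init cluster inside the transient k3s-init service (illustrative sketch)
    ansible.builtin.command:
      # First server bootstraps the embedded etcd cluster; joining servers
      # would pass --server https://<first-server>:6443 instead of --cluster-init.
      cmd: >-
        systemd-run --unit=k3s-init
        -p Restart=on-failure -p RestartSec=5
        k3s server --cluster-init
        --token {{ k3s_token }}
        --tls-san {{ apiserver_endpoint }}
      creates: /etc/systemd/system/k3s.service

Once all servers have joined, the role kills this temporary unit, copies the permanent K3s service file, and enables it, as the subsequent tasks in this play show.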
2025-02-19 09:15:41.897870 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.897884 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.897898 | orchestrator | 2025-02-19 09:15:41.897912 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-02-19 09:15:41.897926 | orchestrator | Wednesday 19 February 2025 08:51:27 +0000 (0:00:01.934) 0:01:01.866 **** 2025-02-19 09:15:41.897947 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-19 09:15:41.897962 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-19 09:15:41.897977 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-19 09:15:41.897991 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-19 09:15:41.898005 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-19 09:15:41.898064 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-19 09:15:41.898079 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-19 09:15:41.898094 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-19 09:15:41.898110 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-19 09:15:41.898130 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-19 09:15:41.898145 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-19 09:15:41.898159 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-02-19 09:15:41.898173 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.898187 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.898201 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.898215 | orchestrator | 2025-02-19 09:15:41.898230 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-02-19 09:15:41.898243 | orchestrator | Wednesday 19 February 2025 08:52:12 +0000 (0:00:45.268) 0:01:47.135 **** 2025-02-19 09:15:41.898257 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.898271 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.898285 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.898299 | orchestrator | 2025-02-19 09:15:41.898347 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-02-19 09:15:41.898362 | orchestrator | Wednesday 19 February 2025 08:52:13 +0000 (0:00:00.563) 0:01:47.699 **** 2025-02-19 09:15:41.898376 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.898643 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.898663 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.898691 | orchestrator | 2025-02-19 09:15:41.898705 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-02-19 09:15:41.898720 | orchestrator | Wednesday 19 February 2025 08:52:14 +0000 (0:00:01.471) 0:01:49.170 **** 2025-02-19 09:15:41.898734 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.898748 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.898762 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.898776 | orchestrator | 2025-02-19 09:15:41.898791 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-02-19 09:15:41.898805 | orchestrator | Wednesday 19 February 2025 08:52:16 +0000 (0:00:01.629) 0:01:50.800 **** 2025-02-19 09:15:41.898818 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.898833 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.898846 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.898860 | orchestrator | 2025-02-19 09:15:41.898874 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-02-19 09:15:41.898888 | orchestrator | Wednesday 19 February 2025 08:52:32 +0000 (0:00:16.135) 0:02:06.936 **** 2025-02-19 09:15:41.898902 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.898916 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.898930 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.898943 | orchestrator | 2025-02-19 09:15:41.898958 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-02-19 09:15:41.898976 | orchestrator | Wednesday 19 February 2025 08:52:33 +0000 (0:00:01.073) 0:02:08.009 **** 2025-02-19 09:15:41.898990 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.899004 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.899018 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.899032 | orchestrator | 2025-02-19 09:15:41.899045 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-02-19 09:15:41.899059 | orchestrator | Wednesday 19 February 2025 08:52:34 +0000 (0:00:00.873) 0:02:08.883 **** 2025-02-19 09:15:41.899073 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.899086 | orchestrator | changed: 
[testbed-node-1] 2025-02-19 09:15:41.899100 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.899114 | orchestrator | 2025-02-19 09:15:41.899128 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-02-19 09:15:41.899142 | orchestrator | Wednesday 19 February 2025 08:52:35 +0000 (0:00:00.889) 0:02:09.772 **** 2025-02-19 09:15:41.899156 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.899170 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.899184 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.899198 | orchestrator | 2025-02-19 09:15:41.899212 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-02-19 09:15:41.899225 | orchestrator | Wednesday 19 February 2025 08:52:36 +0000 (0:00:01.260) 0:02:11.033 **** 2025-02-19 09:15:41.899250 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.899264 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.899278 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.899292 | orchestrator | 2025-02-19 09:15:41.899306 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-02-19 09:15:41.899368 | orchestrator | Wednesday 19 February 2025 08:52:37 +0000 (0:00:00.520) 0:02:11.554 **** 2025-02-19 09:15:41.899383 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.899397 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.899411 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.899425 | orchestrator | 2025-02-19 09:15:41.899439 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-02-19 09:15:41.899453 | orchestrator | Wednesday 19 February 2025 08:52:38 +0000 (0:00:00.877) 0:02:12.431 **** 2025-02-19 09:15:41.899467 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.899481 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.899495 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.899509 | orchestrator | 2025-02-19 09:15:41.899523 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-02-19 09:15:41.899559 | orchestrator | Wednesday 19 February 2025 08:52:39 +0000 (0:00:01.113) 0:02:13.545 **** 2025-02-19 09:15:41.899573 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.899587 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.899601 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.899615 | orchestrator | 2025-02-19 09:15:41.899629 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-02-19 09:15:41.899643 | orchestrator | Wednesday 19 February 2025 08:52:41 +0000 (0:00:02.052) 0:02:15.597 **** 2025-02-19 09:15:41.899658 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:15:41.899672 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:15:41.899686 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:15:41.899699 | orchestrator | 2025-02-19 09:15:41.899714 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-02-19 09:15:41.899727 | orchestrator | Wednesday 19 February 2025 08:52:43 +0000 (0:00:01.671) 0:02:17.269 **** 2025-02-19 09:15:41.899747 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.899772 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.899798 | orchestrator | skipping: [testbed-node-2] 2025-02-19 
09:15:41.899821 | orchestrator | 2025-02-19 09:15:41.899843 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-02-19 09:15:41.899866 | orchestrator | Wednesday 19 February 2025 08:52:43 +0000 (0:00:00.632) 0:02:17.901 **** 2025-02-19 09:15:41.899888 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.899909 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.899921 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.899934 | orchestrator | 2025-02-19 09:15:41.899946 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-02-19 09:15:41.899958 | orchestrator | Wednesday 19 February 2025 08:52:44 +0000 (0:00:00.783) 0:02:18.685 **** 2025-02-19 09:15:41.899971 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.899983 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.899996 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.900008 | orchestrator | 2025-02-19 09:15:41.900020 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-02-19 09:15:41.900033 | orchestrator | Wednesday 19 February 2025 08:52:46 +0000 (0:00:01.949) 0:02:20.634 **** 2025-02-19 09:15:41.900045 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.900057 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.900069 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.900081 | orchestrator | 2025-02-19 09:15:41.900094 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-02-19 09:15:41.900116 | orchestrator | Wednesday 19 February 2025 08:52:47 +0000 (0:00:01.006) 0:02:21.641 **** 2025-02-19 09:15:41.900129 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-19 09:15:41.900142 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-19 09:15:41.900155 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-19 09:15:41.900168 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-19 09:15:41.900181 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-19 09:15:41.900193 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-19 09:15:41.900206 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-19 09:15:41.900218 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-19 09:15:41.900230 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-19 09:15:41.900242 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-02-19 09:15:41.900262 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-19 09:15:41.900275 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-19 09:15:41.900287 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-02-19 09:15:41.900299 | orchestrator 
| changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-19 09:15:41.900329 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-19 09:15:41.900343 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-19 09:15:41.900364 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-19 09:15:41.900381 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-19 09:15:41.900394 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-19 09:15:41.900406 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-19 09:15:41.900418 | orchestrator | 2025-02-19 09:15:41.900431 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-02-19 09:15:41.900443 | orchestrator | 2025-02-19 09:15:41.900456 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-02-19 09:15:41.900468 | orchestrator | Wednesday 19 February 2025 08:52:50 +0000 (0:00:03.497) 0:02:25.139 **** 2025-02-19 09:15:41.900481 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:15:41.900493 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:15:41.900505 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:15:41.900518 | orchestrator | 2025-02-19 09:15:41.900530 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-02-19 09:15:41.900542 | orchestrator | Wednesday 19 February 2025 08:52:51 +0000 (0:00:00.774) 0:02:25.913 **** 2025-02-19 09:15:41.900554 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:15:41.900567 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:15:41.900579 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:15:41.900596 | orchestrator | 2025-02-19 09:15:41.900609 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-02-19 09:15:41.900621 | orchestrator | Wednesday 19 February 2025 08:52:52 +0000 (0:00:00.678) 0:02:26.592 **** 2025-02-19 09:15:41.900633 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:15:41.900645 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:15:41.900658 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:15:41.900670 | orchestrator | 2025-02-19 09:15:41.900682 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-02-19 09:15:41.900695 | orchestrator | Wednesday 19 February 2025 08:52:52 +0000 (0:00:00.377) 0:02:26.969 **** 2025-02-19 09:15:41.900707 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:15:41.900720 | orchestrator | 2025-02-19 09:15:41.900732 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-02-19 09:15:41.900744 | orchestrator | Wednesday 19 February 2025 08:52:53 +0000 (0:00:00.735) 0:02:27.705 **** 2025-02-19 09:15:41.900757 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.900769 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.900781 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.900793 | orchestrator | 2025-02-19 09:15:41.900806 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-02-19 09:15:41.900818 | orchestrator | Wednesday 19 February 2025 08:52:54 +0000 (0:00:00.889) 0:02:28.595 **** 2025-02-19 09:15:41.900830 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.900843 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.900855 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.900868 | orchestrator | 2025-02-19 09:15:41.900886 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-02-19 09:15:41.900898 | orchestrator | Wednesday 19 February 2025 08:52:55 +0000 (0:00:01.071) 0:02:29.666 **** 2025-02-19 09:15:41.900910 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:15:41.900923 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:15:41.900935 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:15:41.900947 | orchestrator | 2025-02-19 09:15:41.900959 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-02-19 09:15:41.900972 | orchestrator | Wednesday 19 February 2025 08:52:56 +0000 (0:00:00.978) 0:02:30.645 **** 2025-02-19 09:15:41.900984 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:41.900996 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:15:41.901008 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:41.901020 | orchestrator | 2025-02-19 09:15:41.901033 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-02-19 09:15:41.901045 | orchestrator | Wednesday 19 February 2025 08:52:58 +0000 (0:00:02.249) 0:02:32.895 **** 2025-02-19 09:15:41.901057 | orchestrator | 2025-02-19 09:15:41.901070 | orchestrator | STILL ALIVE [task 'k3s_agent : Manage k3s service' is running] ***************** 2025-02-19 09:15:41.901082 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:15:41.901094 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:15:41.901107 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:15:41.901119 | orchestrator | 2025-02-19 09:15:41.901131 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-02-19 09:15:41.901143 | orchestrator | 2025-02-19 09:15:41.901155 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-02-19 09:15:41.901168 | orchestrator | Wednesday 19 February 2025 08:55:26 +0000 (0:02:28.358) 0:05:01.254 **** 2025-02-19 09:15:41.901180 | orchestrator | ok: [testbed-manager] 2025-02-19 09:15:41.901192 | orchestrator | 2025-02-19 09:15:41.901204 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-02-19 09:15:41.901216 | orchestrator | Wednesday 19 February 2025 08:55:27 +0000 (0:00:00.621) 0:05:01.875 **** 2025-02-19 09:15:41.901228 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:41.901240 | orchestrator | 2025-02-19 09:15:41.901253 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-19 09:15:41.901265 | orchestrator | Wednesday 19 February 2025 08:55:28 +0000 (0:00:00.605) 0:05:02.481 **** 2025-02-19 09:15:41.901277 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-19 09:15:41.901294 | orchestrator | 2025-02-19 09:15:41.901307 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-19 09:15:41.901363 | orchestrator | Wednesday 19 February 2025 
08:55:29 +0000 (0:00:01.039) 0:05:03.520 **** 2025-02-19 09:15:41.901375 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:41.901388 | orchestrator | 2025-02-19 09:15:41.901400 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-02-19 09:15:41.901412 | orchestrator | Wednesday 19 February 2025 08:55:30 +0000 (0:00:00.971) 0:05:04.491 **** 2025-02-19 09:15:41.901425 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:41.901437 | orchestrator | 2025-02-19 09:15:41.901455 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-02-19 09:15:41.901468 | orchestrator | Wednesday 19 February 2025 08:55:30 +0000 (0:00:00.714) 0:05:05.206 **** 2025-02-19 09:15:41.901480 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-19 09:15:41.901493 | orchestrator | 2025-02-19 09:15:41.901505 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-02-19 09:15:41.901518 | orchestrator | Wednesday 19 February 2025 08:55:32 +0000 (0:00:01.323) 0:05:06.530 **** 2025-02-19 09:15:41.901531 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-19 09:15:41.901543 | orchestrator | 2025-02-19 09:15:41.901556 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-02-19 09:15:41.901569 | orchestrator | Wednesday 19 February 2025 08:55:32 +0000 (0:00:00.725) 0:05:07.255 **** 2025-02-19 09:15:41.901588 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:41.901600 | orchestrator | 2025-02-19 09:15:41.901613 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-02-19 09:15:41.901625 | orchestrator | Wednesday 19 February 2025 08:55:33 +0000 (0:00:00.743) 0:05:07.999 **** 2025-02-19 09:15:41.901638 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:41.901650 | orchestrator | 2025-02-19 09:15:41.901662 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-02-19 09:15:41.901675 | orchestrator | 2025-02-19 09:15:41.901686 | orchestrator | TASK [osism.commons.kubectl : Gather variables for each operating system] ****** 2025-02-19 09:15:41.901696 | orchestrator | Wednesday 19 February 2025 08:55:34 +0000 (0:00:00.757) 0:05:08.757 **** 2025-02-19 09:15:41.901707 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-02-19 09:15:41.901717 | orchestrator | ok: [testbed-manager] 2025-02-19 09:15:41.901727 | orchestrator | 2025-02-19 09:15:41.901738 | orchestrator | TASK [osism.commons.kubectl : Include distribution specific install tasks] ***** 2025-02-19 09:15:41.901748 | orchestrator | Wednesday 19 February 2025 08:55:34 +0000 (0:00:00.260) 0:05:09.017 **** 2025-02-19 09:15:41.901758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-02-19 09:15:41.901770 | orchestrator | 2025-02-19 09:15:41.901780 | orchestrator | TASK [osism.commons.kubectl : Remove old architecture-dependent repository] **** 2025-02-19 09:15:41.901790 | orchestrator | Wednesday 19 February 2025 08:55:35 +0000 (0:00:00.380) 0:05:09.398 **** 2025-02-19 09:15:41.901800 | orchestrator | ok: [testbed-manager] 2025-02-19 09:15:41.901810 | orchestrator | 2025-02-19 09:15:41.901820 | orchestrator | TASK [osism.commons.kubectl : Install apt-transport-https package] ************* 2025-02-19 
09:15:41.901831 | orchestrator | Wednesday 19 February 2025 08:55:37 +0000 (0:00:02.198) 0:05:11.596 **** 2025-02-19 09:15:41.901841 | orchestrator | ok: [testbed-manager] 2025-02-19 09:15:41.901851 | orchestrator | 2025-02-19 09:15:41.901861 | orchestrator | TASK [osism.commons.kubectl : Add repository gpg key] ************************** 2025-02-19 09:15:41.901871 | orchestrator | Wednesday 19 February 2025 08:55:39 +0000 (0:00:02.043) 0:05:13.640 **** 2025-02-19 09:15:41.901881 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:41.901891 | orchestrator | 2025-02-19 09:15:41.901901 | orchestrator | TASK [osism.commons.kubectl : Set permissions of gpg key] ********************** 2025-02-19 09:15:41.901911 | orchestrator | Wednesday 19 February 2025 08:55:40 +0000 (0:00:00.887) 0:05:14.527 **** 2025-02-19 09:15:41.901921 | orchestrator | ok: [testbed-manager] 2025-02-19 09:15:41.901931 | orchestrator | 2025-02-19 09:15:41.901941 | orchestrator | TASK [osism.commons.kubectl : Add repository Debian] *************************** 2025-02-19 09:15:41.901951 | orchestrator | Wednesday 19 February 2025 08:55:40 +0000 (0:00:00.595) 0:05:15.123 **** 2025-02-19 09:15:41.901961 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:41.901971 | orchestrator | 2025-02-19 09:15:41.901981 | orchestrator | TASK [osism.commons.kubectl : Install required packages] *********************** 2025-02-19 09:15:41.901991 | orchestrator | Wednesday 19 February 2025 08:55:48 +0000 (0:00:08.058) 0:05:23.181 **** 2025-02-19 09:15:41.902001 | orchestrator | changed: [testbed-manager] 2025-02-19 09:15:41.902011 | orchestrator | 2025-02-19 09:15:41.902068 | orchestrator | TASK [osism.commons.kubectl : Remove kubectl symlink] ************************** 2025-02-19 09:15:41.902079 | orchestrator | Wednesday 19 February 2025 08:56:03 +0000 (0:00:14.486) 0:05:37.668 **** 2025-02-19 09:15:41.902089 | orchestrator | ok: [testbed-manager] 2025-02-19 09:15:41.902100 | orchestrator | 2025-02-19 09:15:41.902114 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-02-19 09:15:41.902124 | orchestrator | 2025-02-19 09:15:41.902135 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-02-19 09:15:41.902149 | orchestrator | Wednesday 19 February 2025 08:56:04 +0000 (0:00:00.751) 0:05:38.419 **** 2025-02-19 09:15:41.902159 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:15:41.902170 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:15:41.902186 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:15:41.902196 | orchestrator | 2025-02-19 09:15:41.902206 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-02-19 09:15:41.902216 | orchestrator | Wednesday 19 February 2025 08:56:04 +0000 (0:00:00.614) 0:05:39.034 **** 2025-02-19 09:15:41.902226 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.902236 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:15:41.902246 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:15:41.902256 | orchestrator | 2025-02-19 09:15:41.902267 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-02-19 09:15:41.902277 | orchestrator | Wednesday 19 February 2025 08:56:05 +0000 (0:00:00.421) 0:05:39.456 **** 2025-02-19 09:15:41.902287 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 
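
[editorial aside] The "Prepare kubeconfig file" play and the osism.commons.kubectl tasks above fetch the k3s admin kubeconfig from the first control-plane node, rewrite its server address to the kube-vip endpoint https://192.168.16.8:6443 (the address shown in the "Configure kubectl cluster" task name), and export KUBECONFIG on testbed-manager. A minimal manual sketch of the same steps is below; it assumes SSH access from the manager to testbed-node-0 and that the k3s defaults (kubeconfig at /etc/rancher/k3s/k3s.yaml, cluster name "default") are unchanged — the playbook itself does this with Ansible modules, not these exact commands.

    # Hedged sketch only - manual equivalent of the kubeconfig preparation above.
    mkdir -p ~/.kube
    ssh testbed-node-0 sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
    chmod 600 ~/.kube/config
    # Point the client at the kube-vip VIP instead of the node-local address
    # (VIP taken from the task name in this log).
    kubectl config set-cluster default --server=https://192.168.16.8:6443 --kubeconfig ~/.kube/config
    export KUBECONFIG=~/.kube/config
    kubectl get nodes
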
2025-02-19 09:15:41.902297 | orchestrator | 2025-02-19 09:15:41.902322 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-02-19 09:15:41.902333 | orchestrator | Wednesday 19 February 2025 08:56:05 +0000 (0:00:00.615) 0:05:40.071 **** 2025-02-19 09:15:41.902343 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-19 09:15:41.902353 | orchestrator | 2025-02-19 09:15:41.902364 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-02-19 09:15:41.902380 | orchestrator | Wednesday 19 February 2025 08:56:06 +0000 (0:00:00.591) 0:05:40.662 **** 2025-02-19 09:15:41.902390 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:15:41.902400 | orchestrator | 2025-02-19 09:15:41.902410 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-02-19 09:15:41.902420 | orchestrator | Wednesday 19 February 2025 08:56:07 +0000 (0:00:00.602) 0:05:41.264 **** 2025-02-19 09:15:41.902431 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.902441 | orchestrator | 2025-02-19 09:15:41.902451 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-02-19 09:15:41.902461 | orchestrator | Wednesday 19 February 2025 08:56:07 +0000 (0:00:00.757) 0:05:42.021 **** 2025-02-19 09:15:41.902471 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:15:41.902481 | orchestrator | 2025-02-19 09:15:41.902491 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-02-19 09:15:41.902501 | orchestrator | Wednesday 19 February 2025 08:56:08 +0000 (0:00:00.982) 0:05:43.004 **** 2025-02-19 09:15:41.902511 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.902522 | orchestrator | 2025-02-19 09:15:41.902532 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-02-19 09:15:41.902542 | orchestrator | Wednesday 19 February 2025 08:56:09 +0000 (0:00:00.350) 0:05:43.355 **** 2025-02-19 09:15:41.902552 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.902562 | orchestrator | 2025-02-19 09:15:41.902572 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-02-19 09:15:41.902582 | orchestrator | Wednesday 19 February 2025 08:56:09 +0000 (0:00:00.329) 0:05:43.684 **** 2025-02-19 09:15:41.902593 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.902603 | orchestrator | 2025-02-19 09:15:41.902613 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-02-19 09:15:41.902623 | orchestrator | Wednesday 19 February 2025 08:56:09 +0000 (0:00:00.281) 0:05:43.965 **** 2025-02-19 09:15:41.902633 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:15:41.902643 | orchestrator | 2025-02-19 09:15:41.902653 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-02-19 09:15:41.902663 | orchestrator | Wednesday 19 February 2025 08:56:09 +0000 (0:00:00.236) 0:05:44.201 **** 2025-02-19 09:15:41.902673 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-19 09:15:41.902683 | orchestrator | 2025-02-19 09:15:41.902693 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-02-19 09:15:41.902703 | orchestrator | Wednesday 19 February 2025 08:56:21 +0000 (0:00:11.228) 0:05:55.430 **** 2025-02-19 
09:15:41.902713 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-02-19 09:15:41.902729 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-02-19 09:15:41.902740 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left). 2025-02-19 09:15:41.902750 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (28 retries left). 2025-02-19 09:15:41.902760 | orchestrator | 2025-02-19 09:15:41.902771 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.902781 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (27 retries left). 2025-02-19 09:15:41.902791 | orchestrator | 2025-02-19 09:15:41.902801 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.902811 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (26 retries left). 2025-02-19 09:15:41.902821 | orchestrator | 2025-02-19 09:15:41.902831 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.902841 | orchestrator | 2025-02-19 09:15:41.902851 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.902861 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (25 retries left). 2025-02-19 09:15:41.902872 | orchestrator | 2025-02-19 09:15:41.902882 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.902892 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (24 retries left). 2025-02-19 09:15:41.902902 | orchestrator | 2025-02-19 09:15:41.902916 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.902926 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (23 retries left). 2025-02-19 09:15:41.902937 | orchestrator | 2025-02-19 09:15:41.902947 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.902957 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (22 retries left). 2025-02-19 09:15:41.902967 | orchestrator | 2025-02-19 09:15:41.902977 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.902987 | orchestrator | 2025-02-19 09:15:41.902998 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903008 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (21 retries left). 2025-02-19 09:15:41.903018 | orchestrator | 2025-02-19 09:15:41.903034 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903048 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (20 retries left). 
2025-02-19 09:15:41.903058 | orchestrator | 2025-02-19 09:15:41.903068 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903078 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (19 retries left). 2025-02-19 09:15:41.903088 | orchestrator | 2025-02-19 09:15:41.903098 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903113 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (18 retries left). 2025-02-19 09:15:41.903124 | orchestrator | 2025-02-19 09:15:41.903134 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903144 | orchestrator | 2025-02-19 09:15:41.903154 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903164 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (17 retries left). 2025-02-19 09:15:41.903174 | orchestrator | 2025-02-19 09:15:41.903184 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903194 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (16 retries left). 2025-02-19 09:15:41.903209 | orchestrator | 2025-02-19 09:15:41.903219 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903229 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (15 retries left). 2025-02-19 09:15:41.903239 | orchestrator | 2025-02-19 09:15:41.903249 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903259 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (14 retries left). 2025-02-19 09:15:41.903269 | orchestrator | 2025-02-19 09:15:41.903279 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903289 | orchestrator | 2025-02-19 09:15:41.903299 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903325 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (13 retries left). 2025-02-19 09:15:41.903335 | orchestrator | 2025-02-19 09:15:41.903346 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903356 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (12 retries left). 2025-02-19 09:15:41.903366 | orchestrator | 2025-02-19 09:15:41.903376 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903386 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (11 retries left). 2025-02-19 09:15:41.903396 | orchestrator | 2025-02-19 09:15:41.903406 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903417 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (10 retries left). 
2025-02-19 09:15:41.903427 | orchestrator | 2025-02-19 09:15:41.903437 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903447 | orchestrator | 2025-02-19 09:15:41.903457 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903467 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (9 retries left). 2025-02-19 09:15:41.903478 | orchestrator | 2025-02-19 09:15:41.903488 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903498 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (8 retries left). 2025-02-19 09:15:41.903508 | orchestrator | 2025-02-19 09:15:41.903518 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903528 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (7 retries left). 2025-02-19 09:15:41.903539 | orchestrator | 2025-02-19 09:15:41.903549 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903559 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (6 retries left). 2025-02-19 09:15:41.903569 | orchestrator | 2025-02-19 09:15:41.903579 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903594 | orchestrator | 2025-02-19 09:15:41.903604 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903614 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (5 retries left). 2025-02-19 09:15:41.903625 | orchestrator | 2025-02-19 09:15:41.903635 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903645 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (4 retries left). 2025-02-19 09:15:41.903655 | orchestrator | 2025-02-19 09:15:41.903665 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903675 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (3 retries left). 2025-02-19 09:15:41.903690 | orchestrator | 2025-02-19 09:15:41.903700 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903711 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (2 retries left). 2025-02-19 09:15:41.903721 | orchestrator | 2025-02-19 09:15:41.903731 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903745 | orchestrator | 2025-02-19 09:15:41.903761 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] **** 2025-02-19 09:15:41.903777 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (1 retries left). 
2025-02-19 09:15:41.903793 | orchestrator |
2025-02-19 09:15:41.903811 | orchestrator | STILL ALIVE [task 'k3s_server_post : Wait for Cilium resources' is running] ****
2025-02-19 09:15:41.903843 | orchestrator | failed: [testbed-node-0 -> localhost] (item=daemonset/cilium) => {"ansible_loop_var": "item", "attempts": 30, "changed": false, "cmd": ["kubectl", "wait", "pods", "--namespace=kube-system", "--selector=k8s-app=cilium", "--for=condition=Ready", "--timeout=30s"], "delta": "0:00:30.099118", "end": "2025-02-19 09:15:39.397935", "item": {"name": "cilium", "selector": "k8s-app=cilium", "type": "daemonset"}, "msg": "non-zero return code", "rc": 1, "start": "2025-02-19 09:15:09.298817", "stderr": "timed out waiting for the condition on pods/cilium-5bsbd\ntimed out waiting for the condition on pods/cilium-5v48x\ntimed out waiting for the condition on pods/cilium-cz7kj\ntimed out waiting for the condition on pods/cilium-sjbvp\ntimed out waiting for the condition on pods/cilium-sm6dl", "stderr_lines": ["timed out waiting for the condition on pods/cilium-5bsbd", "timed out waiting for the condition on pods/cilium-5v48x", "timed out waiting for the condition on pods/cilium-cz7kj", "timed out waiting for the condition on pods/cilium-sjbvp", "timed out waiting for the condition on pods/cilium-sm6dl"], "stdout": "pod/cilium-2lgvz condition met", "stdout_lines": ["pod/cilium-2lgvz condition met"]}
2025-02-19 09:15:41.903861 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-02-19 09:15:41.903871 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-02-19 09:15:41.903881 | orchestrator |
2025-02-19 09:15:41.903892 | orchestrator | PLAY RECAP *********************************************************************
2025-02-19 09:15:41.903902 | orchestrator | testbed-manager : ok=18  changed=10  unreachable=0 failed=0 skipped=0  rescued=0 ignored=0
2025-02-19 09:15:41.903913 | orchestrator | testbed-node-0  : ok=38  changed=19  unreachable=0 failed=1 skipped=23  rescued=0 ignored=0
2025-02-19 09:15:41.903924 | orchestrator | testbed-node-1  : ok=31  changed=14  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-02-19 09:15:41.903935 | orchestrator | testbed-node-2  : ok=31  changed=14  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-02-19 09:15:41.903945 | orchestrator | testbed-node-3  : ok=12  changed=6   unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-02-19 09:15:41.903956 | orchestrator | testbed-node-4  : ok=12  changed=6   unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-02-19 09:15:41.903966 | orchestrator | testbed-node-5  : ok=12  changed=6   unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-02-19 09:15:41.903976 | orchestrator |
2025-02-19 09:15:41.903986 | orchestrator |
2025-02-19 09:15:41.903996 | orchestrator | TASKS RECAP ********************************************************************
2025-02-19 09:15:41.904013 | orchestrator | Wednesday 19 February 2025 09:15:40 +0000 (0:19:19.502) 0:25:14.933 ****
2025-02-19 09:15:41.904024 | orchestrator | ===============================================================================
2025-02-19 09:15:41.904041 | orchestrator | k3s_server_post : Wait for Cilium resources -------------------------- 1159.50s
2025-02-19 09:15:41.904051 | orchestrator | k3s_agent : Manage k3s service ---------------------------------------- 148.36s
2025-02-19 09:15:41.904061 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.27s
2025-02-19 09:15:41.904072 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 16.14s
2025-02-19 09:15:41.904082 | orchestrator | osism.commons.kubectl : Install required packages ---------------------- 14.49s
2025-02-19 09:15:41.904092 | orchestrator | k3s_server_post : Install Cilium --------------------------------------- 11.23s
2025-02-19 09:15:41.904102 | orchestrator | osism.commons.kubectl : Add repository Debian --------------------------- 8.06s
2025-02-19 09:15:41.904112 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.22s
2025-02-19 09:15:41.904122 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.85s
2025-02-19 09:15:41.904132 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 4.12s
2025-02-19 09:15:41.904143 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 3.85s
2025-02-19 09:15:41.904153 | orchestrator | k3s_prereq : Set same timezone on every Server -------------------------- 3.66s
2025-02-19 09:15:41.904163 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.50s
2025-02-19 09:15:41.904173 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.98s
2025-02-19 09:15:41.904183 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.70s
2025-02-19 09:15:41.904193 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.54s
2025-02-19 09:15:41.904203 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.41s
2025-02-19 09:15:41.904214 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 2.25s
2025-02-19 09:15:41.904224 | orchestrator | osism.commons.kubectl : Remove old architecture-dependent repository ---- 2.20s
2025-02-19 09:15:41.904239 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.18s
2025-02-19 09:15:44.931951 | orchestrator | 2025-02-19 09:15:41 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED
2025-02-19 09:15:44.932133 | orchestrator | 2025-02-19 09:15:41 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED
2025-02-19 09:15:44.932163 | orchestrator | 2025-02-19 09:15:41 | INFO  | Wait 1 second(s) until the next check
2025-02-19 09:15:44.932209 | orchestrator | 2025-02-19 09:15:44 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED
2025-02-19 09:15:44.934507 | orchestrator | 2025-02-19 09:15:44 | INFO  | Task bfbad5f7-acd8-4896-a0f0-68f3fc2f74cf is in state STARTED
2025-02-19 09:15:44.935211 | orchestrator | 2025-02-19 09:15:44 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED
2025-02-19 09:15:44.937644 | orchestrator | 2025-02-19 09:15:44 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED
2025-02-19 09:15:44.939177 | orchestrator | 2025-02-19 09:15:44 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED
2025-02-19 09:15:44.940593 | orchestrator | 2025-02-19 09:15:44 | INFO  | Task 21bedcda-9c5e-46e7-a689-506152006bb8 is in state STARTED
2025-02-19 09:15:44.940973 | orchestrator | 2025-02-19 09:15:44 | INFO  | Wait 1 second(s) until the next check
2025-02-19 09:15:47.979014 |
orchestrator | 2025-02-19 09:15:47 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:47.979624 | orchestrator | 2025-02-19 09:15:47 | INFO  | Task bfbad5f7-acd8-4896-a0f0-68f3fc2f74cf is in state STARTED 2025-02-19 09:15:47.979908 | orchestrator | 2025-02-19 09:15:47 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:47.980090 | orchestrator | 2025-02-19 09:15:47 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:47.980404 | orchestrator | 2025-02-19 09:15:47 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:47.980991 | orchestrator | 2025-02-19 09:15:47 | INFO  | Task 21bedcda-9c5e-46e7-a689-506152006bb8 is in state STARTED 2025-02-19 09:15:51.032535 | orchestrator | 2025-02-19 09:15:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:51.032712 | orchestrator | 2025-02-19 09:15:51 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:51.034227 | orchestrator | 2025-02-19 09:15:51 | INFO  | Task bfbad5f7-acd8-4896-a0f0-68f3fc2f74cf is in state SUCCESS 2025-02-19 09:15:51.035281 | orchestrator | 2025-02-19 09:15:51 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:51.036455 | orchestrator | 2025-02-19 09:15:51 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:51.038093 | orchestrator | 2025-02-19 09:15:51 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:51.041462 | orchestrator | 2025-02-19 09:15:51 | INFO  | Task 21bedcda-9c5e-46e7-a689-506152006bb8 is in state STARTED 2025-02-19 09:15:54.086204 | orchestrator | 2025-02-19 09:15:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:54.086374 | orchestrator | 2025-02-19 09:15:54 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:54.086510 | orchestrator | 2025-02-19 09:15:54 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:54.086532 | orchestrator | 2025-02-19 09:15:54 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:54.087197 | orchestrator | 2025-02-19 09:15:54 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:15:54.087564 | orchestrator | 2025-02-19 09:15:54 | INFO  | Task 21bedcda-9c5e-46e7-a689-506152006bb8 is in state SUCCESS 2025-02-19 09:15:57.137964 | orchestrator | 2025-02-19 09:15:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:15:57.138128 | orchestrator | 2025-02-19 09:15:57 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:15:57.142605 | orchestrator | 2025-02-19 09:15:57 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:15:57.142750 | orchestrator | 2025-02-19 09:15:57 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:15:57.143628 | orchestrator | 2025-02-19 09:15:57 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:00.189539 | orchestrator | 2025-02-19 09:15:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:00.189724 | orchestrator | 2025-02-19 09:16:00 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:00.189865 | orchestrator | 2025-02-19 09:16:00 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 
2025-02-19 09:16:00.189907 | orchestrator | 2025-02-19 09:16:00 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:00.190153 | orchestrator | 2025-02-19 09:16:00 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:03.229148 | orchestrator | 2025-02-19 09:16:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:03.229290 | orchestrator | 2025-02-19 09:16:03 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:03.229540 | orchestrator | 2025-02-19 09:16:03 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:03.231154 | orchestrator | 2025-02-19 09:16:03 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:03.231192 | orchestrator | 2025-02-19 09:16:03 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:06.277293 | orchestrator | 2025-02-19 09:16:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:06.277532 | orchestrator | 2025-02-19 09:16:06 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:06.277616 | orchestrator | 2025-02-19 09:16:06 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:06.278084 | orchestrator | 2025-02-19 09:16:06 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:06.278650 | orchestrator | 2025-02-19 09:16:06 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:09.318694 | orchestrator | 2025-02-19 09:16:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:09.318842 | orchestrator | 2025-02-19 09:16:09 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:09.319153 | orchestrator | 2025-02-19 09:16:09 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:09.319626 | orchestrator | 2025-02-19 09:16:09 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:09.320530 | orchestrator | 2025-02-19 09:16:09 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:12.355828 | orchestrator | 2025-02-19 09:16:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:12.356044 | orchestrator | 2025-02-19 09:16:12 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:12.356153 | orchestrator | 2025-02-19 09:16:12 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:12.356955 | orchestrator | 2025-02-19 09:16:12 | INFO  | Task ab61e576-3431-41ff-a762-2e6950ecdffc is in state STARTED 2025-02-19 09:16:12.357667 | orchestrator | 2025-02-19 09:16:12 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:12.358300 | orchestrator | 2025-02-19 09:16:12 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:15.430362 | orchestrator | 2025-02-19 09:16:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:15.430507 | orchestrator | 2025-02-19 09:16:15 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:15.431112 | orchestrator | 2025-02-19 09:16:15 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:15.432031 | orchestrator | 2025-02-19 09:16:15 | INFO  | Task ab61e576-3431-41ff-a762-2e6950ecdffc is in state STARTED 
2025-02-19 09:16:15.432642 | orchestrator | 2025-02-19 09:16:15 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:15.433559 | orchestrator | 2025-02-19 09:16:15 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:18.492776 | orchestrator | 2025-02-19 09:16:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:18.492918 | orchestrator | 2025-02-19 09:16:18 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:18.496401 | orchestrator | 2025-02-19 09:16:18 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:18.496532 | orchestrator | 2025-02-19 09:16:18 | INFO  | Task ab61e576-3431-41ff-a762-2e6950ecdffc is in state STARTED 2025-02-19 09:16:18.502931 | orchestrator | 2025-02-19 09:16:18 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:21.591966 | orchestrator | 2025-02-19 09:16:18 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:21.592140 | orchestrator | 2025-02-19 09:16:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:21.592192 | orchestrator | 2025-02-19 09:16:21 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:21.593304 | orchestrator | 2025-02-19 09:16:21 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:21.594125 | orchestrator | 2025-02-19 09:16:21 | INFO  | Task ab61e576-3431-41ff-a762-2e6950ecdffc is in state STARTED 2025-02-19 09:16:21.595262 | orchestrator | 2025-02-19 09:16:21 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:21.598171 | orchestrator | 2025-02-19 09:16:21 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:21.598494 | orchestrator | 2025-02-19 09:16:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:24.645481 | orchestrator | 2025-02-19 09:16:24 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:24.645920 | orchestrator | 2025-02-19 09:16:24 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:24.646482 | orchestrator | 2025-02-19 09:16:24 | INFO  | Task ab61e576-3431-41ff-a762-2e6950ecdffc is in state STARTED 2025-02-19 09:16:24.647311 | orchestrator | 2025-02-19 09:16:24 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:27.678712 | orchestrator | 2025-02-19 09:16:24 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:27.678806 | orchestrator | 2025-02-19 09:16:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:27.678832 | orchestrator | 2025-02-19 09:16:27 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:27.680394 | orchestrator | 2025-02-19 09:16:27 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:27.682986 | orchestrator | 2025-02-19 09:16:27 | INFO  | Task ab61e576-3431-41ff-a762-2e6950ecdffc is in state STARTED 2025-02-19 09:16:27.685955 | orchestrator | 2025-02-19 09:16:27 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:27.687686 | orchestrator | 2025-02-19 09:16:27 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:27.688032 | orchestrator | 2025-02-19 09:16:27 | INFO  | Wait 1 second(s) until the next check 
2025-02-19 09:16:30.742648 | orchestrator | 2025-02-19 09:16:30 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:30.742872 | orchestrator | 2025-02-19 09:16:30 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:30.743341 | orchestrator | 2025-02-19 09:16:30 | INFO  | Task ab61e576-3431-41ff-a762-2e6950ecdffc is in state SUCCESS 2025-02-19 09:16:30.747277 | orchestrator | 2025-02-19 09:16:30 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:30.751295 | orchestrator | 2025-02-19 09:16:30 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:33.793436 | orchestrator | 2025-02-19 09:16:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:33.793545 | orchestrator | 2025-02-19 09:16:33 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:33.795302 | orchestrator | 2025-02-19 09:16:33 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:33.795363 | orchestrator | 2025-02-19 09:16:33 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:33.796670 | orchestrator | 2025-02-19 09:16:33 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:36.834386 | orchestrator | 2025-02-19 09:16:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:36.834573 | orchestrator | 2025-02-19 09:16:36 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:36.834724 | orchestrator | 2025-02-19 09:16:36 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:36.836390 | orchestrator | 2025-02-19 09:16:36 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:36.837881 | orchestrator | 2025-02-19 09:16:36 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:39.881493 | orchestrator | 2025-02-19 09:16:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:39.881646 | orchestrator | 2025-02-19 09:16:39 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:39.882119 | orchestrator | 2025-02-19 09:16:39 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:39.883249 | orchestrator | 2025-02-19 09:16:39 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:39.884355 | orchestrator | 2025-02-19 09:16:39 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:42.918316 | orchestrator | 2025-02-19 09:16:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:42.918568 | orchestrator | 2025-02-19 09:16:42 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:42.919098 | orchestrator | 2025-02-19 09:16:42 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:42.919140 | orchestrator | 2025-02-19 09:16:42 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:42.920574 | orchestrator | 2025-02-19 09:16:42 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:45.981873 | orchestrator | 2025-02-19 09:16:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:45.981994 | orchestrator | 2025-02-19 09:16:45 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 
2025-02-19 09:16:45.982170 | orchestrator | 2025-02-19 09:16:45 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:45.984799 | orchestrator | 2025-02-19 09:16:45 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:45.985944 | orchestrator | 2025-02-19 09:16:45 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:49.057350 | orchestrator | 2025-02-19 09:16:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:49.057449 | orchestrator | 2025-02-19 09:16:49 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:49.057502 | orchestrator | 2025-02-19 09:16:49 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:49.058384 | orchestrator | 2025-02-19 09:16:49 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:49.059237 | orchestrator | 2025-02-19 09:16:49 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:49.059361 | orchestrator | 2025-02-19 09:16:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:52.091285 | orchestrator | 2025-02-19 09:16:52 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:52.091652 | orchestrator | 2025-02-19 09:16:52 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:52.092521 | orchestrator | 2025-02-19 09:16:52 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:52.093120 | orchestrator | 2025-02-19 09:16:52 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:55.132670 | orchestrator | 2025-02-19 09:16:52 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:55.132822 | orchestrator | 2025-02-19 09:16:55 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:55.133656 | orchestrator | 2025-02-19 09:16:55 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:55.133694 | orchestrator | 2025-02-19 09:16:55 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:55.134533 | orchestrator | 2025-02-19 09:16:55 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:16:55.134773 | orchestrator | 2025-02-19 09:16:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:16:58.177862 | orchestrator | 2025-02-19 09:16:58 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:16:58.179505 | orchestrator | 2025-02-19 09:16:58 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:16:58.179560 | orchestrator | 2025-02-19 09:16:58 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:16:58.181725 | orchestrator | 2025-02-19 09:16:58 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:17:01.213735 | orchestrator | 2025-02-19 09:16:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:01.213979 | orchestrator | 2025-02-19 09:17:01 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:17:01.214156 | orchestrator | 2025-02-19 09:17:01 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:01.214827 | orchestrator | 2025-02-19 09:17:01 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 
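The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" records in this part of the log are the deployment tooling polling its task IDs once per second until each task leaves the STARTED state. The same wait-and-poll pattern can be sketched as an Ansible task against a hypothetical HTTP status endpoint; this is an illustration of the pattern only, not the actual osism client code:

    - name: Wait until a deployment task has finished   # illustration only
      ansible.builtin.uri:
        # hypothetical endpoint; the real state comes from the osism/Celery backend
        url: "https://manager.example.test/api/tasks/{{ task_id }}"
        return_content: true
      register: task_status
      # re-check once per second until the reported state is no longer STARTED
      until: (task_status.json.state | default('STARTED')) != 'STARTED'
      retries: 600
      delay: 1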
2025-02-19 09:17:01.215648 | orchestrator | 2025-02-19 09:17:01 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:17:04.253488 | orchestrator | 2025-02-19 09:17:01 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:04.253637 | orchestrator | 2025-02-19 09:17:04 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:17:04.253786 | orchestrator | 2025-02-19 09:17:04 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:04.253811 | orchestrator | 2025-02-19 09:17:04 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:04.254634 | orchestrator | 2025-02-19 09:17:04 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:17:07.296896 | orchestrator | 2025-02-19 09:17:04 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:07.297046 | orchestrator | 2025-02-19 09:17:07 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:17:07.297576 | orchestrator | 2025-02-19 09:17:07 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:07.297743 | orchestrator | 2025-02-19 09:17:07 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:07.298571 | orchestrator | 2025-02-19 09:17:07 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state STARTED 2025-02-19 09:17:07.299225 | orchestrator | 2025-02-19 09:17:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:10.360837 | orchestrator | 2025-02-19 09:17:10 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:17:10.365009 | orchestrator | 2025-02-19 09:17:10 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:10.365757 | orchestrator | 2025-02-19 09:17:10 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:10.374922 | orchestrator | 2025-02-19 09:17:10 | INFO  | Task 8fc8c293-66d9-41dc-a7db-db24980ab1fb is in state SUCCESS 2025-02-19 09:17:10.375914 | orchestrator | 2025-02-19 09:17:10.375989 | orchestrator | 2025-02-19 09:17:10.376033 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-02-19 09:17:10.376049 | orchestrator | 2025-02-19 09:17:10.376077 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-19 09:17:10.376091 | orchestrator | Wednesday 19 February 2025 09:15:46 +0000 (0:00:00.183) 0:00:00.183 **** 2025-02-19 09:17:10.376106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-19 09:17:10.376132 | orchestrator | 2025-02-19 09:17:10.376147 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-19 09:17:10.376179 | orchestrator | Wednesday 19 February 2025 09:15:47 +0000 (0:00:00.991) 0:00:01.174 **** 2025-02-19 09:17:10.376195 | orchestrator | changed: [testbed-manager] 2025-02-19 09:17:10.376210 | orchestrator | 2025-02-19 09:17:10.376223 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-02-19 09:17:10.376237 | orchestrator | Wednesday 19 February 2025 09:15:48 +0000 (0:00:01.609) 0:00:02.784 **** 2025-02-19 09:17:10.376251 | orchestrator | changed: [testbed-manager] 2025-02-19 09:17:10.376266 | orchestrator | 2025-02-19 09:17:10.376280 | orchestrator | PLAY RECAP 
********************************************************************* 2025-02-19 09:17:10.376294 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:17:10.376309 | orchestrator | 2025-02-19 09:17:10.376323 | orchestrator | 2025-02-19 09:17:10.376387 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:17:10.376402 | orchestrator | Wednesday 19 February 2025 09:15:49 +0000 (0:00:00.563) 0:00:03.348 **** 2025-02-19 09:17:10.376415 | orchestrator | =============================================================================== 2025-02-19 09:17:10.376429 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.61s 2025-02-19 09:17:10.376460 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.99s 2025-02-19 09:17:10.376474 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.56s 2025-02-19 09:17:10.376488 | orchestrator | 2025-02-19 09:17:10.376502 | orchestrator | 2025-02-19 09:17:10.376517 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-02-19 09:17:10.376533 | orchestrator | 2025-02-19 09:17:10.376548 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-02-19 09:17:10.376564 | orchestrator | Wednesday 19 February 2025 09:15:46 +0000 (0:00:00.250) 0:00:00.250 **** 2025-02-19 09:17:10.376579 | orchestrator | ok: [testbed-manager] 2025-02-19 09:17:10.376597 | orchestrator | 2025-02-19 09:17:10.376612 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-02-19 09:17:10.376628 | orchestrator | Wednesday 19 February 2025 09:15:46 +0000 (0:00:00.760) 0:00:01.011 **** 2025-02-19 09:17:10.376644 | orchestrator | ok: [testbed-manager] 2025-02-19 09:17:10.376659 | orchestrator | 2025-02-19 09:17:10.376675 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-19 09:17:10.376714 | orchestrator | Wednesday 19 February 2025 09:15:47 +0000 (0:00:00.801) 0:00:01.812 **** 2025-02-19 09:17:10.376731 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-19 09:17:10.376746 | orchestrator | 2025-02-19 09:17:10.376762 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-19 09:17:10.376778 | orchestrator | Wednesday 19 February 2025 09:15:48 +0000 (0:00:01.072) 0:00:02.885 **** 2025-02-19 09:17:10.376794 | orchestrator | changed: [testbed-manager] 2025-02-19 09:17:10.376809 | orchestrator | 2025-02-19 09:17:10.376824 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-02-19 09:17:10.376840 | orchestrator | Wednesday 19 February 2025 09:15:50 +0000 (0:00:01.319) 0:00:04.204 **** 2025-02-19 09:17:10.376855 | orchestrator | changed: [testbed-manager] 2025-02-19 09:17:10.376876 | orchestrator | 2025-02-19 09:17:10.376890 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-02-19 09:17:10.376904 | orchestrator | Wednesday 19 February 2025 09:15:51 +0000 (0:00:00.864) 0:00:05.069 **** 2025-02-19 09:17:10.376918 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-19 09:17:10.376932 | orchestrator | 2025-02-19 09:17:10.376946 | orchestrator | TASK [Change server address in the kubeconfig inside 
the manager service] ****** 2025-02-19 09:17:10.376960 | orchestrator | Wednesday 19 February 2025 09:15:52 +0000 (0:00:01.223) 0:00:06.292 **** 2025-02-19 09:17:10.376974 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-19 09:17:10.376988 | orchestrator | 2025-02-19 09:17:10.377001 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-02-19 09:17:10.377016 | orchestrator | Wednesday 19 February 2025 09:15:52 +0000 (0:00:00.429) 0:00:06.722 **** 2025-02-19 09:17:10.377029 | orchestrator | ok: [testbed-manager] 2025-02-19 09:17:10.377043 | orchestrator | 2025-02-19 09:17:10.377057 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-02-19 09:17:10.377071 | orchestrator | Wednesday 19 February 2025 09:15:53 +0000 (0:00:00.498) 0:00:07.220 **** 2025-02-19 09:17:10.377085 | orchestrator | ok: [testbed-manager] 2025-02-19 09:17:10.377099 | orchestrator | 2025-02-19 09:17:10.377113 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:17:10.377127 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:17:10.377141 | orchestrator | 2025-02-19 09:17:10.377155 | orchestrator | 2025-02-19 09:17:10.377169 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:17:10.377183 | orchestrator | Wednesday 19 February 2025 09:15:53 +0000 (0:00:00.371) 0:00:07.592 **** 2025-02-19 09:17:10.377197 | orchestrator | =============================================================================== 2025-02-19 09:17:10.377211 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.32s 2025-02-19 09:17:10.377225 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.22s 2025-02-19 09:17:10.377239 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.07s 2025-02-19 09:17:10.377265 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.86s 2025-02-19 09:17:10.377280 | orchestrator | Create .kube directory -------------------------------------------------- 0.80s 2025-02-19 09:17:10.377294 | orchestrator | Get home directory of operator user ------------------------------------- 0.76s 2025-02-19 09:17:10.377313 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.50s 2025-02-19 09:17:10.377355 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.43s 2025-02-19 09:17:10.377370 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.37s 2025-02-19 09:17:10.377384 | orchestrator | 2025-02-19 09:17:10.377398 | orchestrator | None 2025-02-19 09:17:10.377412 | orchestrator | 2025-02-19 09:17:10.377425 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:17:10.377439 | orchestrator | 2025-02-19 09:17:10.377461 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:17:10.377475 | orchestrator | Wednesday 19 February 2025 09:09:27 +0000 (0:00:00.730) 0:00:00.730 **** 2025-02-19 09:17:10.377489 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:17:10.377503 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:17:10.377517 | orchestrator | ok: [testbed-node-2] 
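Before the Kolla output continues: the two kubeconfig plays above follow a simple pattern, namely read the kubeconfig from the first control-plane node, write it to the manager for the operator user, then rewrite the server address so kubectl reaches the API over a routable endpoint. A compact sketch of that pattern, with assumed paths, a placeholder variable and a placeholder address (the real tasks live in the osism playbooks and may differ):

    - name: Get kubeconfig file
      ansible.builtin.slurp:
        src: /etc/kubernetes/admin.conf          # assumed path; depends on the Kubernetes distribution
      delegate_to: testbed-node-0
      register: kubeconfig

    - name: Write kubeconfig file
      ansible.builtin.copy:
        content: "{{ kubeconfig.content | b64decode }}"
        dest: "{{ operator_home }}/.kube/config"  # operator_home is a placeholder variable
        mode: "0600"

    - name: Change server address in the kubeconfig file
      ansible.builtin.replace:
        path: "{{ operator_home }}/.kube/config"
        regexp: 'server: https://.*:6443'
        replace: 'server: https://192.168.16.254:6443'   # placeholder API address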
2025-02-19 09:17:10.377530 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:17:10.377544 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:17:10.377558 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:17:10.377571 | orchestrator | 2025-02-19 09:17:10.377585 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:17:10.377599 | orchestrator | Wednesday 19 February 2025 09:09:29 +0000 (0:00:01.529) 0:00:02.259 **** 2025-02-19 09:17:10.377613 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-02-19 09:17:10.377627 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-02-19 09:17:10.377641 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-02-19 09:17:10.377655 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-02-19 09:17:10.377669 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-02-19 09:17:10.377682 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-02-19 09:17:10.377696 | orchestrator | 2025-02-19 09:17:10.377710 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-02-19 09:17:10.377724 | orchestrator | 2025-02-19 09:17:10.377738 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-19 09:17:10.377752 | orchestrator | Wednesday 19 February 2025 09:09:30 +0000 (0:00:01.333) 0:00:03.593 **** 2025-02-19 09:17:10.377767 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:17:10.377781 | orchestrator | 2025-02-19 09:17:10.377795 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-02-19 09:17:10.377809 | orchestrator | Wednesday 19 February 2025 09:09:32 +0000 (0:00:02.238) 0:00:05.831 **** 2025-02-19 09:17:10.377823 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:17:10.377837 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:17:10.377851 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:17:10.377865 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:17:10.377879 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:17:10.377893 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:17:10.377906 | orchestrator | 2025-02-19 09:17:10.377921 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-02-19 09:17:10.377934 | orchestrator | Wednesday 19 February 2025 09:09:35 +0000 (0:00:02.691) 0:00:08.523 **** 2025-02-19 09:17:10.377948 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:17:10.377962 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:17:10.377976 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:17:10.377990 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:17:10.378003 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:17:10.378072 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:17:10.378090 | orchestrator | 2025-02-19 09:17:10.378104 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-02-19 09:17:10.378118 | orchestrator | Wednesday 19 February 2025 09:09:37 +0000 (0:00:02.459) 0:00:10.982 **** 2025-02-19 09:17:10.378132 | orchestrator | ok: [testbed-node-0] => { 2025-02-19 09:17:10.378146 | orchestrator |  "changed": false, 2025-02-19 09:17:10.378160 | orchestrator |  "msg": "All assertions passed" 
2025-02-19 09:17:10.378174 | orchestrator | } 2025-02-19 09:17:10.378188 | orchestrator | ok: [testbed-node-1] => { 2025-02-19 09:17:10.378202 | orchestrator |  "changed": false, 2025-02-19 09:17:10.378216 | orchestrator |  "msg": "All assertions passed" 2025-02-19 09:17:10.378230 | orchestrator | } 2025-02-19 09:17:10.378244 | orchestrator | ok: [testbed-node-2] => { 2025-02-19 09:17:10.378258 | orchestrator |  "changed": false, 2025-02-19 09:17:10.378272 | orchestrator |  "msg": "All assertions passed" 2025-02-19 09:17:10.378293 | orchestrator | } 2025-02-19 09:17:10.378307 | orchestrator | ok: [testbed-node-3] => { 2025-02-19 09:17:10.378321 | orchestrator |  "changed": false, 2025-02-19 09:17:10.378352 | orchestrator |  "msg": "All assertions passed" 2025-02-19 09:17:10.378366 | orchestrator | } 2025-02-19 09:17:10.378380 | orchestrator | ok: [testbed-node-4] => { 2025-02-19 09:17:10.378395 | orchestrator |  "changed": false, 2025-02-19 09:17:10.378408 | orchestrator |  "msg": "All assertions passed" 2025-02-19 09:17:10.378422 | orchestrator | } 2025-02-19 09:17:10.378436 | orchestrator | ok: [testbed-node-5] => { 2025-02-19 09:17:10.378450 | orchestrator |  "changed": false, 2025-02-19 09:17:10.378464 | orchestrator |  "msg": "All assertions passed" 2025-02-19 09:17:10.378478 | orchestrator | } 2025-02-19 09:17:10.378492 | orchestrator | 2025-02-19 09:17:10.378545 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-02-19 09:17:10.378560 | orchestrator | Wednesday 19 February 2025 09:09:39 +0000 (0:00:01.813) 0:00:12.795 **** 2025-02-19 09:17:10.378574 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.378588 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.378602 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.378616 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.378709 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.378726 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.378740 | orchestrator | 2025-02-19 09:17:10.378754 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-02-19 09:17:10.378768 | orchestrator | Wednesday 19 February 2025 09:09:40 +0000 (0:00:01.101) 0:00:13.896 **** 2025-02-19 09:17:10.378790 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-02-19 09:17:10.378804 | orchestrator | 2025-02-19 09:17:10.378818 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-02-19 09:17:10.378832 | orchestrator | Wednesday 19 February 2025 09:09:44 +0000 (0:00:03.928) 0:00:17.825 **** 2025-02-19 09:17:10.378846 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-02-19 09:17:10.378862 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-02-19 09:17:10.378888 | orchestrator | 2025-02-19 09:17:10.378902 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-02-19 09:17:10.378916 | orchestrator | Wednesday 19 February 2025 09:09:52 +0000 (0:00:07.387) 0:00:25.213 **** 2025-02-19 09:17:10.378965 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:17:10.378979 | orchestrator | 2025-02-19 09:17:10.378993 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-02-19 09:17:10.379062 | 
orchestrator | Wednesday 19 February 2025 09:09:56 +0000 (0:00:04.061) 0:00:29.275 **** 2025-02-19 09:17:10.379077 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:17:10.379091 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-02-19 09:17:10.379105 | orchestrator | 2025-02-19 09:17:10.379119 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-02-19 09:17:10.379133 | orchestrator | Wednesday 19 February 2025 09:10:00 +0000 (0:00:04.625) 0:00:33.900 **** 2025-02-19 09:17:10.379147 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:17:10.379187 | orchestrator | 2025-02-19 09:17:10.379202 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-02-19 09:17:10.379216 | orchestrator | Wednesday 19 February 2025 09:10:04 +0000 (0:00:03.887) 0:00:37.788 **** 2025-02-19 09:17:10.379230 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-02-19 09:17:10.379244 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-02-19 09:17:10.379258 | orchestrator | 2025-02-19 09:17:10.379272 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-19 09:17:10.379293 | orchestrator | Wednesday 19 February 2025 09:10:14 +0000 (0:00:10.018) 0:00:47.807 **** 2025-02-19 09:17:10.379316 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.379386 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.379403 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.379417 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.379430 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.379442 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.379455 | orchestrator | 2025-02-19 09:17:10.379467 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-02-19 09:17:10.379493 | orchestrator | Wednesday 19 February 2025 09:10:15 +0000 (0:00:00.787) 0:00:48.594 **** 2025-02-19 09:17:10.379505 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.379518 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.379538 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.379552 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.379565 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.379578 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.379602 | orchestrator | 2025-02-19 09:17:10.379616 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-02-19 09:17:10.379644 | orchestrator | Wednesday 19 February 2025 09:10:20 +0000 (0:00:04.532) 0:00:53.126 **** 2025-02-19 09:17:10.379665 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:17:10.379679 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:17:10.379691 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:17:10.379727 | orchestrator | ok: [testbed-node-3] 2025-02-19 09:17:10.379740 | orchestrator | ok: [testbed-node-4] 2025-02-19 09:17:10.379752 | orchestrator | ok: [testbed-node-5] 2025-02-19 09:17:10.379764 | orchestrator | 2025-02-19 09:17:10.379777 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-02-19 09:17:10.379790 | orchestrator | Wednesday 19 February 2025 09:10:21 +0000 (0:00:01.223) 0:00:54.350 **** 
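The service-ks-register tasks above register Neutron in Keystone: a service of type "network", internal and public endpoints on port 9696, the service project, a neutron service user, and admin/service role grants on that project. kolla-ansible drives this through its Ansible OpenStack modules; purely as an illustration, the equivalent steps expressed with the openstack CLI look roughly like this (region name, password variable and cloud name are assumptions):

    - name: Register neutron in Keystone (illustrative CLI form)
      ansible.builtin.command: "{{ item }}"
      loop:
        - openstack service create --name neutron network
        - openstack endpoint create --region RegionOne neutron internal https://api-int.testbed.osism.xyz:9696
        - openstack endpoint create --region RegionOne neutron public https://api.testbed.osism.xyz:9696
        # password on the command line is for illustration only; the variable name is a placeholder
        - openstack user create --project service --password {{ neutron_keystone_password }} neutron
        - openstack role add --project service --user neutron admin
        - openstack role add --project service --user neutron service
      environment:
        OS_CLOUD: admin    # assumes a clouds.yaml entry with admin credentials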
2025-02-19 09:17:10.379802 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.379814 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.379827 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.379839 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.379851 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.379863 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.379876 | orchestrator | 2025-02-19 09:17:10.379888 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-02-19 09:17:10.379901 | orchestrator | Wednesday 19 February 2025 09:10:25 +0000 (0:00:03.958) 0:00:58.309 **** 2025-02-19 09:17:10.379916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.379955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.380063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.380104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.380198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.380238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.380318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.380349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.380390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.380431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.380445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.380460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.380473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.380492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.380517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380549 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.380570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.380669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.380748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.380774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.380801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.380820 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.380839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.380866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.380938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.380978 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.380991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.381016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.381069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.381101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.381120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381133 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.381146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.381178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.381191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.381729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.381758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.381807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.381855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.381890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.381901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.381924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.381959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.381971 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.381995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.382079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.382094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382122 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.382133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.382167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.382177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.382199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.382251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382268 | orchestrator | 2025-02-19 09:17:10.382278 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-02-19 09:17:10.382293 | orchestrator | Wednesday 19 February 2025 09:10:31 +0000 (0:00:06.122) 0:01:04.431 **** 2025-02-19 09:17:10.382304 | orchestrator | [WARNING]: Skipped 2025-02-19 09:17:10.382315 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-02-19 09:17:10.382344 | orchestrator | due to this access issue: 2025-02-19 09:17:10.382357 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-02-19 09:17:10.382369 | orchestrator | a directory 2025-02-19 09:17:10.382380 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:17:10.382391 | orchestrator | 2025-02-19 09:17:10.382412 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-19 09:17:10.382424 | orchestrator | Wednesday 19 February 2025 09:10:32 +0000 (0:00:00.603) 0:01:05.035 **** 2025-02-19 09:17:10.382435 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:17:10.382448 | orchestrator | 2025-02-19 09:17:10.382458 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-02-19 09:17:10.382469 | 
orchestrator | Wednesday 19 February 2025 09:10:35 +0000 (0:00:03.777) 0:01:08.812 **** 2025-02-19 09:17:10.382479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.382490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.382501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.382518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.382535 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.382545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.382556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.382567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.382586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.382597 | orchestrator | 2025-02-19 09:17:10.382607 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-02-19 09:17:10.382618 | orchestrator | Wednesday 19 February 2025 09:10:44 +0000 (0:00:08.764) 0:01:17.576 **** 2025-02-19 09:17:10.382634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.382645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382655 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.382666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.382677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382693 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.382703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.382713 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.382728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.382739 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.382750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.382760 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.382771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.382781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382796 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.382807 | orchestrator | 2025-02-19 09:17:10.382817 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-02-19 09:17:10.382827 | orchestrator | Wednesday 19 February 2025 09:10:50 +0000 (0:00:05.956) 0:01:23.533 **** 2025-02-19 09:17:10.382838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.382860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382871 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.382882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.382892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382908 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.382918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.382929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.382939 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.382954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.382964 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.382975 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.382985 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.382996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.383016 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.383027 | orchestrator | 2025-02-19 09:17:10.383037 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-02-19 09:17:10.383047 | orchestrator | Wednesday 19 February 2025 09:10:55 +0000 (0:00:05.329) 0:01:28.863 **** 2025-02-19 09:17:10.383058 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.383068 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.383078 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.383088 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.383098 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.383108 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.383119 | orchestrator | 2025-02-19 09:17:10.383129 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-02-19 09:17:10.383140 | orchestrator | Wednesday 19 February 2025 09:11:02 +0000 (0:00:06.438) 0:01:35.301 **** 2025-02-19 09:17:10.383150 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.383160 | orchestrator | 2025-02-19 09:17:10.383170 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-02-19 09:17:10.383181 | orchestrator | Wednesday 19 February 2025 09:11:02 +0000 (0:00:00.102) 0:01:35.404 **** 2025-02-19 09:17:10.383190 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.383200 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.383210 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.383220 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.383230 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.383240 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.383250 | orchestrator | 2025-02-19 09:17:10.383260 | orchestrator | TASK [neutron : Copying over existing policy 
file] ***************************** 2025-02-19 09:17:10.383270 | orchestrator | Wednesday 19 February 2025 09:11:03 +0000 (0:00:00.812) 0:01:36.216 **** 2025-02-19 09:17:10.383281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.383297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.383382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.383434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.383446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.383474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.383495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.383506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.383537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.383548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383559 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.383569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.383580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.383632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.383658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.383669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.383700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.383722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.383732 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.383763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.383963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.383979 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.383990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.384002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.384062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.384085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.384097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.384142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.384168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.384179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.384200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.384223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384234 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.384245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.384256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 
09:17:10.384278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.384308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.384380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.384393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.384417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.384551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.384793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.384818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.384835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.384850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384880 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.384890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.384918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.384929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.384960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.384987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.384997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.385029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.385061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.385070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.385093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.385102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.385113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385126 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.385135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.385447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.385457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.385488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.385514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.385523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.385547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.385556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385570 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.385579 | orchestrator | 2025-02-19 09:17:10.385588 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-02-19 09:17:10.385597 | orchestrator | Wednesday 19 February 2025 09:11:10 +0000 (0:00:07.355) 0:01:43.572 **** 2025-02-19 09:17:10.385612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.385622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.385933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.385953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.385963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.385990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.386009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.386083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.386093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.386120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.386216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.386244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.386555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.386586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.386945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.386963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.387026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.387037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.387300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.387510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.387537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.387566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.387594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.387604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.387621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.387656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.387761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.387784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.388542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.388562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.388586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.388635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.388647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.388655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.388725 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.388745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.388754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.388824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.388836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.389067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.389100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.389109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-02-19 09:17:10.389138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.389148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.389260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.389315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.389494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.389728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.389745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.389774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.389784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.389802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.389862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.389895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.389948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.389957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.389971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.390063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.390082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.390100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.390114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.390188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.390198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.390206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 
'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.390269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.390289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.390298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390306 
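In the per-item results above, whether a host reports "changed" or "skipping" for a given neutron service tracks two fields of the service definition printed with each item: 'enabled' and 'host_in_groups' (for example, neutron-ovn-metadata-agent is "changed" on the compute nodes where host_in_groups is True and "skipping" elsewhere). The sketch below is illustrative only: the entry is a trimmed copy of the neutron-ovn-metadata-agent definition from the log, and should_handle is an assumed reconstruction of that gate, not kolla-ansible's actual condition.

    # Illustrative sketch (assumption, not kolla-ansible source): reproduce the
    # "changed" vs "skipping" pattern seen in the task output above.
    neutron_services = {
        "neutron-ovn-metadata-agent": {
            "container_name": "neutron_ovn_metadata_agent",
            "image": "registry.osism.tech/kolla/neutron-metadata-agent:2024.1",
            "enabled": True,
            "host_in_groups": True,  # True only on hosts meant to run this agent
            "volumes": [
                "/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro",
                "kolla_logs:/var/log/kolla/",
            ],
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
                "timeout": "30",
            },
        },
    }

    def should_handle(service: dict) -> bool:
        # Assumed gate: the host acts on a service only when the service is
        # enabled and the host belongs to the service's group (host_in_groups).
        return bool(service.get("enabled")) and bool(service.get("host_in_groups"))

    for name, service in neutron_services.items():
        print(name, "changed" if should_handle(service) else "skipping")

Run on a host where host_in_groups is True this prints "neutron-ovn-metadata-agent changed", matching the per-host results logged above; flipping either field to False yields "skipping".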
| orchestrator | 2025-02-19 09:17:10.390533 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-02-19 09:17:10.390551 | orchestrator | Wednesday 19 February 2025 09:11:16 +0000 (0:00:05.890) 0:01:49.463 **** 2025-02-19 09:17:10.390560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.390569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390672 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.390681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.390704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.390713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.390784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.390879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.390910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.390964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.390976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.391199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.391211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.391279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.391313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.391322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.391385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.391395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.391404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.391621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.391648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 
09:17:10.391659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.391716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.391734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.391808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.391829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.391841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 
'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.392015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.392044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.392060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.392111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.392122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.392297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.392319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.392351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.392622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.392693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.392719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.392735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.392850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.392871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.392887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.392895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.392902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.393150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.393167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.393184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.393200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.393207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.393280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.393289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393297 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.393304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.393403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.393424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.393445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.393452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.393510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.393544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.393552 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.393576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.393623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.393647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.393655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.393669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.393722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.393736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.393750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.393776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.393817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.393831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393839 | orchestrator | 2025-02-19 09:17:10.393846 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-02-19 09:17:10.393853 | orchestrator | Wednesday 19 February 2025 09:11:30 +0000 (0:00:14.185) 0:02:03.649 **** 2025-02-19 09:17:10.393867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.393874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.393942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.393957 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.393973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.393980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.394052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.394071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.394100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.394146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.394163 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.394173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.394249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.394297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.394360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.394387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.394399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.394449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.394488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.394568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.394626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.394643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.394716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.394763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 
'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.394770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394810 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.394818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.394831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.394851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.394858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.394887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.394919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.394955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.394971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.394982 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.394995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395051 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.395060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.395078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.395121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.395135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.395153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.395234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.395307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.395350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.395371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.395413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395422 | orchestrator | 2025-02-19 09:17:10.395428 | orchestrator | TASK [neutron : Copying over ssh key] 
****************************************** 2025-02-19 09:17:10.395435 | orchestrator | Wednesday 19 February 2025 09:11:37 +0000 (0:00:06.555) 0:02:10.205 **** 2025-02-19 09:17:10.395441 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.395448 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.395458 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.395465 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:10.395471 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:10.395477 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:10.395483 | orchestrator | 2025-02-19 09:17:10.395489 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-02-19 09:17:10.395496 | orchestrator | Wednesday 19 February 2025 09:11:47 +0000 (0:00:10.656) 0:02:20.862 **** 2025-02-19 09:17:10.395510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.395517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.395597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.395681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.395711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.395733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.395774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395788 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.395794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.395801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': 
True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.395871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395880 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.395900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.395906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.395990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.396003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396024 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.396031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.396077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.396127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.396189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.396221 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.396228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.396246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.396311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.396318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396338 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.396354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.396401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.396441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.396532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.396551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.396618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.396625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.396644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.396718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.396810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.396825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.396836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.396893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.396903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.396922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396964 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.396981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.396988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.397020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.397028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.397087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.397104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.397111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 
5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.397129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.397174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397184 | orchestrator | 2025-02-19 09:17:10.397191 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-02-19 09:17:10.397198 | orchestrator | Wednesday 19 February 2025 09:11:55 +0000 (0:00:07.345) 0:02:28.207 **** 2025-02-19 09:17:10.397205 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.397212 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.397226 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.397232 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.397239 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.397245 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.397251 | orchestrator | 2025-02-19 09:17:10.397257 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-02-19 09:17:10.397263 | orchestrator | Wednesday 19 February 2025 09:12:00 +0000 (0:00:05.434) 0:02:33.642 **** 2025-02-19 09:17:10.397270 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.397276 | orchestrator | skipping: [testbed-node-1] 2025-02-19 
09:17:10.397282 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.397288 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.397294 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.397305 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.397311 | orchestrator |
2025-02-19 09:17:10.397317 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-02-19 09:17:10.397372 | orchestrator | Wednesday 19 February 2025 09:12:07 +0000 (0:00:06.924) 0:02:40.566 ****
2025-02-19 09:17:10.397380 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.397386 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.397392 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.397398 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.397405 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.397411 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.397417 | orchestrator |
2025-02-19 09:17:10.397423 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-02-19 09:17:10.397429 | orchestrator | Wednesday 19 February 2025 09:12:12 +0000 (0:00:04.879) 0:02:45.445 ****
2025-02-19 09:17:10.397436 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.397442 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.397448 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.397454 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.397460 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.397466 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.397473 | orchestrator |
2025-02-19 09:17:10.397479 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-02-19 09:17:10.397485 | orchestrator | Wednesday 19 February 2025 09:12:17 +0000 (0:00:05.210) 0:02:50.656 ****
2025-02-19 09:17:10.397491 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.397497 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.397504 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.397510 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.397516 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.397522 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.397528 | orchestrator |
2025-02-19 09:17:10.397534 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-02-19 09:17:10.397541 | orchestrator | Wednesday 19 February 2025 09:12:24 +0000 (0:00:07.116) 0:02:57.773 ****
2025-02-19 09:17:10.397547 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.397553 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.397559 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.397566 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.397572 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.397578 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.397584 | orchestrator |
2025-02-19 09:17:10.397590 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-02-19 09:17:10.397596 | orchestrator | Wednesday 19 February 2025 09:12:35 +0000 (0:00:11.160) 0:03:08.934 ****
2025-02-19 09:17:10.397603 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-19
09:17:10.397609 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.397616 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-19 09:17:10.397622 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.397628 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-19 09:17:10.397634 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.397641 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-19 09:17:10.397647 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.397657 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-19 09:17:10.397663 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.397670 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-02-19 09:17:10.397676 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.397686 | orchestrator | 2025-02-19 09:17:10.397693 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-02-19 09:17:10.397699 | orchestrator | Wednesday 19 February 2025 09:12:46 +0000 (0:00:10.203) 0:03:19.138 **** 2025-02-19 09:17:10.397743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.397754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.397830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.397847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.397854 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.397874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.397891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.397928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.397951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.397958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.397965 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.397975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.398012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.398048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398074 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.398134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.398226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.398242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.398274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.398312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398344 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.398351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.398358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.398431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.398480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.398527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.398565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.398611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.398621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398645 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.398658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.398712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.398732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.398756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.398803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398812 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.398819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.398830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.398897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.398930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.398944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.398990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.399000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.399031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.399037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399044 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.399082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.399091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.399169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.399216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.399260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-02-19 09:17:10.399269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.399294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.399301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399307 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.399314 | orchestrator | 2025-02-19 09:17:10.399320 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-02-19 09:17:10.399339 | orchestrator | Wednesday 19 February 2025 09:12:54 +0000 (0:00:08.338) 0:03:27.476 **** 2025-02-19 09:17:10.399387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.399396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.399435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 
09:17:10.399514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.399528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.399601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.399608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399614 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.399621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.399659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.399699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399762 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.399786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.399793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.399850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.399936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.399946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.399953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.399973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.399990 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.399997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.400050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.400064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.400125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.400142 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400149 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.400155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.400162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.400235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.400316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.400369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.400393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.400432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400442 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.400457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.400465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.400544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.400587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.400631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.400653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.400659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400665 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.400696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.400704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.400732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.400785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.400796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.400814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.400827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-02-19 09:17:10.400836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-02-19 09:17:10.400843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-02-19 09:17:10.400849 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.400855 | orchestrator |
2025-02-19 09:17:10.400860 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-02-19 09:17:10.400866 | orchestrator | Wednesday 19 February 2025 09:13:04 +0000 (0:00:09.921) 0:03:37.398 ****
2025-02-19 09:17:10.400878 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.400884 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.400889 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.400895 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.400900 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.400906 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.400911 | orchestrator |
2025-02-19 09:17:10.400918 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-02-19 09:17:10.400924 | orchestrator | Wednesday 19 February 2025 09:13:09 +0000 (0:00:05.530) 0:03:42.929 ****
2025-02-19 09:17:10.400929 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.400935 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.400940 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.400945 | orchestrator | changed: [testbed-node-4]
2025-02-19 09:17:10.400951 | orchestrator | changed: [testbed-node-3]
2025-02-19 09:17:10.400956 | orchestrator | changed: [testbed-node-5]
2025-02-19 09:17:10.400962 | orchestrator |
2025-02-19 09:17:10.400967 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-02-19 09:17:10.400973 | orchestrator | Wednesday 19 February 2025 09:13:20 +0000 (0:00:10.659) 0:03:53.588 ****
2025-02-19 09:17:10.400978 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.400984 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.400990 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.400995 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.401001 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401006 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401012 | orchestrator |
2025-02-19 09:17:10.401017 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-02-19 09:17:10.401023 | orchestrator | Wednesday 19 February 2025 09:13:25 +0000 (0:00:04.726) 0:03:58.316 ****
2025-02-19 09:17:10.401041 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.401048 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401054 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.401060 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401066 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.401076 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.401081 | orchestrator |
2025-02-19 09:17:10.401087 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-02-19 09:17:10.401093 | orchestrator | Wednesday 19 February 2025 09:13:28 +0000 (0:00:03.597) 0:04:01.913 ****
2025-02-19 09:17:10.401099 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401105 | orchestrator | changed: [testbed-node-0]
2025-02-19 09:17:10.401110 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.401116 | orchestrator | changed: [testbed-node-1]
2025-02-19 09:17:10.401122 | orchestrator | changed: [testbed-node-2]
2025-02-19 09:17:10.401127 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401133 | orchestrator |
2025-02-19 09:17:10.401139 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-02-19 09:17:10.401145 | orchestrator | Wednesday 19 February 2025 09:13:44 +0000 (0:00:16.030) 0:04:17.944 ****
2025-02-19 09:17:10.401150 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.401156 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.401162 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.401168 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401173 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.401179 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401185 | orchestrator |
2025-02-19 09:17:10.401191 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-02-19 09:17:10.401196 | orchestrator | Wednesday 19 February 2025 09:13:55 +0000 (0:00:10.597) 0:04:28.541 ****
2025-02-19 09:17:10.401202 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.401211 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.401216 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401222 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.401227 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401233 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.401238 | orchestrator |
2025-02-19 09:17:10.401244 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-02-19 09:17:10.401249 | orchestrator | Wednesday 19 February 2025 09:14:02 +0000 (0:00:06.917) 0:04:35.458 ****
2025-02-19 09:17:10.401255 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401261 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.401267 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.401273 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.401279 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.401285 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401291 | orchestrator |
2025-02-19 09:17:10.401298 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-02-19 09:17:10.401304 | orchestrator | Wednesday 19 February 2025 09:14:09 +0000 (0:00:07.352) 0:04:42.811 ****
2025-02-19 09:17:10.401310 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.401316 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.401322 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.401340 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401346 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.401352 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401358 | orchestrator |
2025-02-19 09:17:10.401364 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-02-19 09:17:10.401370 | orchestrator | Wednesday 19 February 2025 09:14:16 +0000 (0:00:06.702) 0:04:49.514 ****
2025-02-19 09:17:10.401375 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.401381 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.401388 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.401393 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401399 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.401405 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401411 | orchestrator |
2025-02-19 09:17:10.401417 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-02-19 09:17:10.401426 | orchestrator | Wednesday 19 February 2025 09:14:22 +0000 (0:00:06.033) 0:04:55.547 ****
2025-02-19 09:17:10.401433 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-19 09:17:10.401439 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:17:10.401445 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-19 09:17:10.401451 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:17:10.401457 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-19 09:17:10.401462 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:17:10.401468 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-19 09:17:10.401474 | orchestrator | skipping: [testbed-node-5]
2025-02-19 09:17:10.401480 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-19 09:17:10.401486 | orchestrator | skipping: [testbed-node-3]
2025-02-19 09:17:10.401492 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-02-19 09:17:10.401498 | orchestrator | skipping: [testbed-node-4]
2025-02-19 09:17:10.401504 | orchestrator |
2025-02-19 09:17:10.401510 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-02-19 09:17:10.401516 | orchestrator | Wednesday 19 February 2025 09:14:26 +0000 (0:00:04.390) 0:04:59.937 ****
2025-02-19 09:17:10.401541 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.401548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.401591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.401605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.401618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.401635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.401646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.401663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.401681 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.401690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401695 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.401701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.401718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.401752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.401777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.401790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.401805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.401816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.401822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.401852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.401861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401866 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.401872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.401877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.401906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.401963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.401969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.401980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.401986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.402051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.402068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402097 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.402120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.402125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.402131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.402169 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.402175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402186 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.402195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.402213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402222 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.402228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.402234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.402277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 
09:17:10.402283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.402304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.402371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.402406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.402417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 
09:17:10.402452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.402475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.402493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.402499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.402523 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.402541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.402558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.402564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402573 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.402579 | orchestrator | 2025-02-19 09:17:10.402584 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-02-19 09:17:10.402590 | orchestrator | Wednesday 19 February 2025 09:14:33 +0000 (0:00:06.316) 0:05:06.254 **** 2025-02-19 09:17:10.402611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.402618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.402644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.402691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.402735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-19 09:17:10.402775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.402805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.402833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.402867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.402872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.402898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-19 09:17:10.402903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.402943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.402984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.402997 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-19 09:17:10.403002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.403024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.403029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.403035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.403053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.403064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403069 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.403074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.403087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.403097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.403108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403122 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.403127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.403154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.403162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.403168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.403173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.403187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-02-19 09:17:10.403208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-19 09:17:10.403222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:17:10.403227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.403242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.403262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.403269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.403292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.403297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:10.403322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-19 09:17:10.403342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-19 09:17:10.403348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-19 09:17:10.403353 | orchestrator | 2025-02-19 09:17:10.403358 | 
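The container definitions dumped in the loop above each carry a kolla-style healthcheck mapping (interval, retries, start_period, test, timeout). As a minimal illustration only, and assuming those values are plain seconds that map onto Docker's standard health options, the sketch below renders one such mapping (copied from the ironic-neutron-agent entry above) as the equivalent `docker run` flags; the helper name to_docker_flags is purely illustrative.

# Minimal sketch, not the deployment code: turn a kolla-style healthcheck
# mapping (as printed in the log above) into equivalent `docker run` flags.
healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port ironic-neutron-agent 5672"],
    "timeout": "30",
}

def to_docker_flags(hc: dict) -> list[str]:
    cmd = " ".join(hc["test"][1:])  # drop the CMD-SHELL marker, keep the shell command
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

print(" ".join(to_docker_flags(healthcheck)))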
orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-19 09:17:10.403363 | orchestrator | Wednesday 19 February 2025 09:14:38 +0000 (0:00:05.465) 0:05:11.720 **** 2025-02-19 09:17:10.403367 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:10.403372 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:10.403377 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:10.403382 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:17:10.403387 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:17:10.403392 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:17:10.403396 | orchestrator | 2025-02-19 09:17:10.403401 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-02-19 09:17:10.403406 | orchestrator | Wednesday 19 February 2025 09:14:40 +0000 (0:00:01.524) 0:05:13.245 **** 2025-02-19 09:17:10.403411 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:10.403415 | orchestrator | 2025-02-19 09:17:10.403420 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-02-19 09:17:10.403425 | orchestrator | Wednesday 19 February 2025 09:14:42 +0000 (0:00:02.763) 0:05:16.008 **** 2025-02-19 09:17:10.403430 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:10.403435 | orchestrator | 2025-02-19 09:17:10.403440 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-02-19 09:17:10.403444 | orchestrator | Wednesday 19 February 2025 09:14:45 +0000 (0:00:02.641) 0:05:18.650 **** 2025-02-19 09:17:10.403449 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:10.403454 | orchestrator | 2025-02-19 09:17:10.403459 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-19 09:17:10.403466 | orchestrator | Wednesday 19 February 2025 09:15:25 +0000 (0:00:39.386) 0:05:58.036 **** 2025-02-19 09:17:10.403471 | orchestrator | 2025-02-19 09:17:10.403476 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-19 09:17:10.403481 | orchestrator | Wednesday 19 February 2025 09:15:25 +0000 (0:00:00.456) 0:05:58.493 **** 2025-02-19 09:17:10.403485 | orchestrator | 2025-02-19 09:17:10.403492 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-19 09:17:10.403497 | orchestrator | Wednesday 19 February 2025 09:15:25 +0000 (0:00:00.250) 0:05:58.743 **** 2025-02-19 09:17:10.403502 | orchestrator | 2025-02-19 09:17:10.403507 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-19 09:17:10.403514 | orchestrator | Wednesday 19 February 2025 09:15:25 +0000 (0:00:00.207) 0:05:58.950 **** 2025-02-19 09:17:10.403519 | orchestrator | 2025-02-19 09:17:10.403523 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-19 09:17:10.403528 | orchestrator | Wednesday 19 February 2025 09:15:26 +0000 (0:00:00.179) 0:05:59.130 **** 2025-02-19 09:17:10.403533 | orchestrator | 2025-02-19 09:17:10.403538 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-02-19 09:17:10.403542 | orchestrator | Wednesday 19 February 2025 09:15:26 +0000 (0:00:00.391) 0:05:59.521 **** 2025-02-19 09:17:10.403547 | orchestrator | 2025-02-19 09:17:10.403552 | orchestrator | RUNNING HANDLER [neutron : Restart 
neutron-server container] ******************* 2025-02-19 09:17:10.403557 | orchestrator | Wednesday 19 February 2025 09:15:26 +0000 (0:00:00.100) 0:05:59.622 **** 2025-02-19 09:17:10.403562 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:10.403567 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:10.403572 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:10.403576 | orchestrator | 2025-02-19 09:17:10.403581 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-02-19 09:17:10.403586 | orchestrator | Wednesday 19 February 2025 09:16:01 +0000 (0:00:34.769) 0:06:34.391 **** 2025-02-19 09:17:10.403591 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:17:10.403596 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:17:10.403600 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:17:10.403605 | orchestrator | 2025-02-19 09:17:10.403610 | orchestrator | RUNNING HANDLER [neutron : Restart ironic-neutron-agent container] ************* 2025-02-19 09:17:10.403615 | orchestrator | Wednesday 19 February 2025 09:16:43 +0000 (0:00:42.420) 0:07:16.812 **** 2025-02-19 09:17:10.403620 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:10.403625 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:10.403633 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:10.403638 | orchestrator | 2025-02-19 09:17:10.403643 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:17:10.403648 | orchestrator | testbed-node-0 : ok=29  changed=18  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-02-19 09:17:10.403654 | orchestrator | testbed-node-1 : ok=19  changed=11  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-02-19 09:17:10.403660 | orchestrator | testbed-node-2 : ok=19  changed=11  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-02-19 09:17:10.403665 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-02-19 09:17:10.403670 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-02-19 09:17:10.403674 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-02-19 09:17:10.403679 | orchestrator | 2025-02-19 09:17:10.403686 | orchestrator | 2025-02-19 09:17:10.403691 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:17:10.403696 | orchestrator | Wednesday 19 February 2025 09:17:08 +0000 (0:00:24.475) 0:07:41.288 **** 2025-02-19 09:17:10.403701 | orchestrator | =============================================================================== 2025-02-19 09:17:10.403706 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 42.42s 2025-02-19 09:17:10.403711 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.39s 2025-02-19 09:17:10.403716 | orchestrator | neutron : Restart neutron-server container ----------------------------- 34.77s 2025-02-19 09:17:10.403721 | orchestrator | neutron : Restart ironic-neutron-agent container ----------------------- 24.48s 2025-02-19 09:17:10.403725 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------ 16.03s 2025-02-19 09:17:10.403730 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 14.19s 2025-02-19 
09:17:10.403735 | orchestrator | neutron : Copying over dhcp_agent.ini ---------------------------------- 11.16s 2025-02-19 09:17:10.403740 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------ 10.66s 2025-02-19 09:17:10.403745 | orchestrator | neutron : Copying over ssh key ----------------------------------------- 10.66s 2025-02-19 09:17:10.403750 | orchestrator | neutron : Copying over bgp_dragent.ini --------------------------------- 10.60s 2025-02-19 09:17:10.403755 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------ 10.20s 2025-02-19 09:17:10.403759 | orchestrator | service-ks-register : neutron | Granting user roles -------------------- 10.02s 2025-02-19 09:17:10.403764 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 9.92s 2025-02-19 09:17:10.403769 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 8.76s 2025-02-19 09:17:10.403776 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 8.34s 2025-02-19 09:17:10.403781 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.39s 2025-02-19 09:17:10.403786 | orchestrator | neutron : Copying over existing policy file ----------------------------- 7.36s 2025-02-19 09:17:10.403791 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 7.35s 2025-02-19 09:17:10.403796 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 7.35s 2025-02-19 09:17:10.403801 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 7.12s 2025-02-19 09:17:10.403807 | orchestrator | 2025-02-19 09:17:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:13.443798 | orchestrator | 2025-02-19 09:17:13 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:17:13.444735 | orchestrator | 2025-02-19 09:17:13 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:13.445264 | orchestrator | 2025-02-19 09:17:13 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:13.446967 | orchestrator | 2025-02-19 09:17:13 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:16.478085 | orchestrator | 2025-02-19 09:17:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:16.478237 | orchestrator | 2025-02-19 09:17:16 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state STARTED 2025-02-19 09:17:16.479835 | orchestrator | 2025-02-19 09:17:16 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:16.480753 | orchestrator | 2025-02-19 09:17:16 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:16.481759 | orchestrator | 2025-02-19 09:17:16 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:16.481924 | orchestrator | 2025-02-19 09:17:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:19.535554 | orchestrator | 2025-02-19 09:17:19 | INFO  | Task dd0b1963-dfea-4262-a4db-44fb88fdd6f7 is in state SUCCESS 2025-02-19 09:17:19.537859 | orchestrator | 2025-02-19 09:17:19.538111 | orchestrator | 2025-02-19 09:17:19.538144 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-02-19 09:17:19.538161 | orchestrator | 2025-02-19 
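The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above are a plain polling loop around the OSISM task queue. A rough sketch of that pattern follows; check_state is a hypothetical stand-in for the real client call, and the one-second interval matches the log output.

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, check_state, interval=1):
    # Poll every task until it reaches a terminal state, printing lines
    # like the ones above. check_state(task_id) -> str is a hypothetical
    # stand-in for the real OSISM client call.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = check_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)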
09:17:19.538691 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-02-19 09:17:19.538711 | orchestrator | Wednesday 19 February 2025 09:15:09 +0000 (0:00:00.100) 0:00:00.100 **** 2025-02-19 09:17:19.538726 | orchestrator | changed: [localhost] 2025-02-19 09:17:19.538744 | orchestrator | 2025-02-19 09:17:19.538759 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-02-19 09:17:19.538788 | orchestrator | Wednesday 19 February 2025 09:15:10 +0000 (0:00:00.707) 0:00:00.807 **** 2025-02-19 09:17:19.538804 | orchestrator | changed: [localhost] 2025-02-19 09:17:19.538830 | orchestrator | 2025-02-19 09:17:19.538845 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-02-19 09:17:19.538859 | orchestrator | Wednesday 19 February 2025 09:15:46 +0000 (0:00:36.360) 0:00:37.168 **** 2025-02-19 09:17:19.538873 | orchestrator | changed: [localhost] 2025-02-19 09:17:19.538887 | orchestrator | 2025-02-19 09:17:19.538902 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:17:19.538916 | orchestrator | 2025-02-19 09:17:19.538930 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:17:19.538944 | orchestrator | Wednesday 19 February 2025 09:15:52 +0000 (0:00:05.416) 0:00:42.585 **** 2025-02-19 09:17:19.538959 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:17:19.538973 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:17:19.538987 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:17:19.539001 | orchestrator | 2025-02-19 09:17:19.539015 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:17:19.539029 | orchestrator | Wednesday 19 February 2025 09:15:52 +0000 (0:00:00.740) 0:00:43.325 **** 2025-02-19 09:17:19.539044 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_True) 2025-02-19 09:17:19.539058 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_True) 2025-02-19 09:17:19.539072 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_True) 2025-02-19 09:17:19.539086 | orchestrator | 2025-02-19 09:17:19.539100 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-02-19 09:17:19.539114 | orchestrator | 2025-02-19 09:17:19.539129 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-02-19 09:17:19.539143 | orchestrator | Wednesday 19 February 2025 09:15:54 +0000 (0:00:01.431) 0:00:44.757 **** 2025-02-19 09:17:19.539157 | orchestrator | included: /ansible/roles/ironic/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:17:19.539172 | orchestrator | 2025-02-19 09:17:19.539186 | orchestrator | TASK [service-ks-register : ironic | Creating services] ************************ 2025-02-19 09:17:19.539201 | orchestrator | Wednesday 19 February 2025 09:15:55 +0000 (0:00:00.745) 0:00:45.502 **** 2025-02-19 09:17:19.539216 | orchestrator | changed: [testbed-node-0] => (item=ironic (baremetal)) 2025-02-19 09:17:19.539230 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector (baremetal-introspection)) 2025-02-19 09:17:19.539245 | orchestrator | 2025-02-19 09:17:19.539259 | orchestrator | TASK [service-ks-register : ironic | Creating endpoints] *********************** 2025-02-19 09:17:19.539273 | orchestrator | Wednesday 19 
February 2025 09:16:01 +0000 (0:00:06.834) 0:00:52.336 **** 2025-02-19 09:17:19.539287 | orchestrator | changed: [testbed-node-0] => (item=ironic -> https://api-int.testbed.osism.xyz:6385 -> internal) 2025-02-19 09:17:19.539301 | orchestrator | changed: [testbed-node-0] => (item=ironic -> https://api.testbed.osism.xyz:6385 -> public) 2025-02-19 09:17:19.539316 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> https://api-int.testbed.osism.xyz:5050 -> internal) 2025-02-19 09:17:19.539453 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> https://api.testbed.osism.xyz:5050 -> public) 2025-02-19 09:17:19.539502 | orchestrator | 2025-02-19 09:17:19.539518 | orchestrator | TASK [service-ks-register : ironic | Creating projects] ************************ 2025-02-19 09:17:19.539534 | orchestrator | Wednesday 19 February 2025 09:16:16 +0000 (0:00:14.356) 0:01:06.693 **** 2025-02-19 09:17:19.539549 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:17:19.539565 | orchestrator | 2025-02-19 09:17:19.539581 | orchestrator | TASK [service-ks-register : ironic | Creating users] *************************** 2025-02-19 09:17:19.539596 | orchestrator | Wednesday 19 February 2025 09:16:21 +0000 (0:00:04.808) 0:01:11.502 **** 2025-02-19 09:17:19.539611 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:17:19.539626 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service) 2025-02-19 09:17:19.539642 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service) 2025-02-19 09:17:19.539657 | orchestrator | 2025-02-19 09:17:19.539672 | orchestrator | TASK [service-ks-register : ironic | Creating roles] *************************** 2025-02-19 09:17:19.539688 | orchestrator | Wednesday 19 February 2025 09:16:28 +0000 (0:00:07.787) 0:01:19.289 **** 2025-02-19 09:17:19.539703 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:17:19.539719 | orchestrator | 2025-02-19 09:17:19.539749 | orchestrator | TASK [service-ks-register : ironic | Granting user roles] ********************** 2025-02-19 09:17:19.539765 | orchestrator | Wednesday 19 February 2025 09:16:32 +0000 (0:00:03.583) 0:01:22.873 **** 2025-02-19 09:17:19.539781 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service -> admin) 2025-02-19 09:17:19.539796 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service -> admin) 2025-02-19 09:17:19.539812 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service -> service) 2025-02-19 09:17:19.539828 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service -> service) 2025-02-19 09:17:19.539843 | orchestrator | 2025-02-19 09:17:19.539858 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-19 09:17:19.539874 | orchestrator | Wednesday 19 February 2025 09:16:50 +0000 (0:00:17.689) 0:01:40.562 **** 2025-02-19 09:17:19.539935 | orchestrator | changed: [testbed-node-2] => (item=iscsi_tcp) 2025-02-19 09:17:19.539954 | orchestrator | changed: [testbed-node-1] => (item=iscsi_tcp) 2025-02-19 09:17:19.539970 | orchestrator | changed: [testbed-node-0] => (item=iscsi_tcp) 2025-02-19 09:17:19.539986 | orchestrator | 2025-02-19 09:17:19.540002 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-19 09:17:19.540124 | orchestrator | Wednesday 19 February 2025 09:16:52 +0000 (0:00:01.918) 0:01:42.481 **** 2025-02-19 
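The service-ks-register steps above (creating the ironic and ironic-inspector services, their internal and public endpoints, and the service users and role grants) boil down to ordinary Keystone API calls. A minimal sketch with openstacksdk is shown below; the clouds.yaml entry name and the region are assumptions, while the service type and endpoint URLs are taken from the log.

# Minimal sketch of part of the Keystone registration performed above,
# using openstacksdk. The cloud name "testbed" and region "RegionOne"
# are assumptions for illustration only.
import openstack

conn = openstack.connect(cloud="testbed")

service = conn.identity.create_service(name="ironic", type="baremetal")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:6385"),
    ("public", "https://api.testbed.osism.xyz:6385"),
]:
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",
    )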
09:17:19.540142 | orchestrator | changed: [testbed-node-1] => (item=iscsi_tcp) 2025-02-19 09:17:19.540158 | orchestrator | changed: [testbed-node-2] => (item=iscsi_tcp) 2025-02-19 09:17:19.540173 | orchestrator | changed: [testbed-node-0] => (item=iscsi_tcp) 2025-02-19 09:17:19.540188 | orchestrator | 2025-02-19 09:17:19.540203 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-19 09:17:19.540219 | orchestrator | Wednesday 19 February 2025 09:16:54 +0000 (0:00:02.816) 0:01:45.298 **** 2025-02-19 09:17:19.540234 | orchestrator | skipping: [testbed-node-0] => (item=iscsi_tcp)  2025-02-19 09:17:19.540249 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:19.540265 | orchestrator | skipping: [testbed-node-1] => (item=iscsi_tcp)  2025-02-19 09:17:19.540280 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:19.540302 | orchestrator | skipping: [testbed-node-2] => (item=iscsi_tcp)  2025-02-19 09:17:19.540318 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:19.540388 | orchestrator | 2025-02-19 09:17:19.540405 | orchestrator | TASK [ironic : Ensuring config directories exist] ****************************** 2025-02-19 09:17:19.540419 | orchestrator | Wednesday 19 February 2025 09:16:56 +0000 (0:00:01.294) 0:01:46.593 **** 2025-02-19 09:17:19.540435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.540469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.540486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.540585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.540607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.540623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.540649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 
'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.540677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.540702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-19 09:17:19.540719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.540741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-19 09:17:19.540757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.540772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:17:19.540788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:17:19.540803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-19 09:17:19.540839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.540856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:17:19.540880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': 
['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:17:19.540896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.540911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:17:19.540926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:17:19.540941 | orchestrator | 2025-02-19 09:17:19.540955 | orchestrator | TASK [ironic : Check if Ironic policies shall be overwritten] ****************** 2025-02-19 09:17:19.540970 | orchestrator | Wednesday 19 February 2025 09:16:59 +0000 (0:00:03.289) 0:01:49.882 **** 2025-02-19 09:17:19.540984 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:19.540999 | orchestrator | 2025-02-19 09:17:19.541013 | orchestrator | TASK [ironic : Check if Ironic Inspector policies shall be overwritten] ******** 2025-02-19 09:17:19.541027 | orchestrator | Wednesday 19 February 2025 09:16:59 +0000 (0:00:00.157) 0:01:50.039 **** 2025-02-19 09:17:19.541042 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:19.541056 | orchestrator | 2025-02-19 09:17:19.541070 | orchestrator | TASK [ironic : Set ironic policy file] ***************************************** 2025-02-19 09:17:19.541084 | orchestrator | Wednesday 19 February 2025 09:16:59 +0000 (0:00:00.172) 0:01:50.212 **** 2025-02-19 09:17:19.541099 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:19.541113 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:19.541127 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:19.541141 | orchestrator | 2025-02-19 09:17:19.541155 | orchestrator | TASK [ironic : Set ironic-inspector policy file] ******************************* 2025-02-19 09:17:19.541472 | orchestrator | Wednesday 19 February 2025 09:17:00 +0000 (0:00:00.474) 0:01:50.686 **** 2025-02-19 09:17:19.541493 | orchestrator | skipping: [testbed-node-0] 2025-02-19 
09:17:19.541508 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:19.541523 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:19.541537 | orchestrator | 2025-02-19 09:17:19.541551 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-02-19 09:17:19.541566 | orchestrator | Wednesday 19 February 2025 09:17:00 +0000 (0:00:00.390) 0:01:51.077 **** 2025-02-19 09:17:19.541587 | orchestrator | included: /ansible/roles/ironic/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:17:19.541610 | orchestrator | 2025-02-19 09:17:19.541624 | orchestrator | TASK [service-cert-copy : ironic | Copying over extra CA certificates] ********* 2025-02-19 09:17:19.541638 | orchestrator | Wednesday 19 February 2025 09:17:01 +0000 (0:00:00.600) 0:01:51.678 **** 2025-02-19 09:17:19.541655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.541670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.541685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': 
'6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.541701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.541723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.541744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.541760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.541775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 
'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.541791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.541820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-19 09:17:19.541836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-19 09:17:19.541851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-02-19 09:17:19.541866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 
'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.541881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.541896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.541911 | orchestrator | 2025-02-19 09:17:19.541925 | orchestrator | TASK [service-cert-copy : ironic | Copying over backend internal TLS certificate] *** 2025-02-19 09:17:19.541940 | orchestrator | Wednesday 19 February 2025 09:17:06 +0000 (0:00:04.727) 0:01:56.406 **** 2025-02-19 09:17:19.541955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:17:19.541983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:17:19.542000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:17:19.542015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:17:19.542090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:17:19.542106 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:19.542121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:17:19.542156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:17:19.542172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:17:19.542186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:17:19.542201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:17:19.542216 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:19.542230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:17:19.542252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:17:19.542276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:17:19.542291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:17:19.542306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:17:19.542321 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:19.542355 | orchestrator | 2025-02-19 09:17:19.542370 | orchestrator | TASK [service-cert-copy : ironic | Copying over backend internal TLS key] ****** 2025-02-19 09:17:19.542384 | orchestrator | Wednesday 19 February 2025 09:17:08 +0000 (0:00:02.106) 0:01:58.512 **** 2025-02-19 09:17:19.542399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:17:19.542421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:17:19.542444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:17:19.542460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:17:19.542475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:17:19.542490 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:19.542505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:17:19.542528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:17:19.542550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:17:19.542565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:17:19.542580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 
'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:17:19.542596 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:19.542610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-19 09:17:19.542625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:17:19.542647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-19 09:17:19.542669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 
'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-19 09:17:19.542684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-19 09:17:19.542699 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:19.542714 | orchestrator | 2025-02-19 09:17:19.542728 | orchestrator | TASK [ironic : Copying over config.json files for services] ******************** 2025-02-19 09:17:19.542743 | orchestrator | Wednesday 19 February 2025 09:17:11 +0000 (0:00:03.231) 0:02:01.744 **** 2025-02-19 09:17:19.542757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.542773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.542794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-19 09:17:19.542823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.542839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.542854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:19.542868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.542889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.542911 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleError: An unhandled exception occurred while templating '{{ ironic_tftp_interface_address }}'. Error was a , original message: An unhandled exception occurred while templating '{{ 'ironic_tftp' | kolla_address }}'. Error was a , original message: Interface 'ironic-boot' not present on host 'testbed-node-1' 2025-02-19 09:17:19.542927 | orchestrator | failed: [testbed-node-1] (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "ironic-tftp", "value": {"container_name": "ironic_tftp", "dimensions": {}, "enabled": true, "environment": {"HTTPBOOT_PATH": "/var/lib/ironic/httpboot", "TFTPBOOT_PATH": "/var/lib/ironic/tftpboot"}, "group": "ironic-tftp", "image": "registry.osism.tech/kolla/ironic-pxe:2024.1", "volumes": ["/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "ironic:/var/lib/ironic", "kolla_logs:/var/log/kolla"]}}, "msg": "AnsibleError: An unhandled exception occurred while templating '{{ ironic_tftp_interface_address }}'. Error was a , original message: An unhandled exception occurred while templating '{{ 'ironic_tftp' | kolla_address }}'. Error was a , original message: Interface 'ironic-boot' not present on host 'testbed-node-1'"} 2025-02-19 09:17:19.542944 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleError: An unhandled exception occurred while templating '{{ ironic_tftp_interface_address }}'. Error was a , original message: An unhandled exception occurred while templating '{{ 'ironic_tftp' | kolla_address }}'. 
Error was a , original message: Interface 'ironic-boot' not present on host 'testbed-node-0' 2025-02-19 09:17:19.542966 | orchestrator | failed: [testbed-node-0] (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "ironic-tftp", "value": {"container_name": "ironic_tftp", "dimensions": {}, "enabled": true, "environment": {"HTTPBOOT_PATH": "/var/lib/ironic/httpboot", "TFTPBOOT_PATH": "/var/lib/ironic/tftpboot"}, "group": "ironic-tftp", "image": "registry.osism.tech/kolla/ironic-pxe:2024.1", "volumes": ["/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "ironic:/var/lib/ironic", "kolla_logs:/var/log/kolla"]}}, "msg": "AnsibleError: An unhandled exception occurred while templating '{{ ironic_tftp_interface_address }}'. Error was a , original message: An unhandled exception occurred while templating '{{ 'ironic_tftp' | kolla_address }}'. Error was a , original message: Interface 'ironic-boot' not present on host 'testbed-node-0'"} 2025-02-19 09:17:19.542990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-19 09:17:19.543005 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleError: An unhandled exception occurred while templating '{{ ironic_tftp_interface_address }}'. Error was a , original message: An unhandled exception occurred while templating '{{ 'ironic_tftp' | kolla_address }}'. 
Error was a , original message: Interface 'ironic-boot' not present on host 'testbed-node-2' 2025-02-19 09:17:19.543021 | orchestrator | failed: [testbed-node-2] (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "ironic-tftp", "value": {"container_name": "ironic_tftp", "dimensions": {}, "enabled": true, "environment": {"HTTPBOOT_PATH": "/var/lib/ironic/httpboot", "TFTPBOOT_PATH": "/var/lib/ironic/tftpboot"}, "group": "ironic-tftp", "image": "registry.osism.tech/kolla/ironic-pxe:2024.1", "volumes": ["/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "ironic:/var/lib/ironic", "kolla_logs:/var/log/kolla"]}}, "msg": "AnsibleError: An unhandled exception occurred while templating '{{ ironic_tftp_interface_address }}'. Error was a , original message: An unhandled exception occurred while templating '{{ 'ironic_tftp' | kolla_address }}'. Error was a , original message: Interface 'ironic-boot' not present on host 'testbed-node-2'"} 2025-02-19 09:17:19.543043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.543058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:17:19.543079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:17:19.543094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', 
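The three ironic-tftp failures above share one root cause: kolla-ansible templates 'ironic_tftp_interface_address' through the kolla_address filter, which looks up the address of the interface configured for the ironic_tftp service, and that interface ('ironic-boot') is not present on testbed-node-0/1/2. A minimal sketch of the kind of override that would avoid the failed lookup, assuming kolla-ansible's usual '<service>_interface' convention and that the generic network_interface exists on the ironic hosts (both assumptions, not taken from this job's configuration; the alternative fix is to create the expected ironic-boot interface on the nodes before deploying ironic):

    # Hypothetical kolla globals override (illustration only):
    # point the ironic TFTP boot service at an interface that exists on every
    # ironic-tftp host, so "{{ 'ironic_tftp' | kolla_address }}" can resolve.
    ironic_tftp_interface: "{{ network_interface }}"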
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.543108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:17:19.543123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:17:19.543145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'registry.osism.tech/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-02-19 09:17:19.543160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'registry.osism.tech/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-19 09:17:19.543175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'registry.osism.tech/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-19 09:17:19.543189 | orchestrator | 2025-02-19 09:17:19.543204 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:17:19.543218 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:17:19.543233 | orchestrator | testbed-node-0 : ok=14  changed=8  unreachable=0 failed=1  skipped=7  rescued=0 ignored=0 2025-02-19 
09:17:19.543249 | orchestrator | testbed-node-1 : ok=8  changed=4  unreachable=0 failed=1  skipped=5  rescued=0 ignored=0 2025-02-19 09:17:19.543263 | orchestrator | testbed-node-2 : ok=8  changed=4  unreachable=0 failed=1  skipped=5  rescued=0 ignored=0 2025-02-19 09:17:19.543277 | orchestrator | 2025-02-19 09:17:19.543292 | orchestrator | 2025-02-19 09:17:19.543311 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:17:22.578515 | orchestrator | Wednesday 19 February 2025 09:17:16 +0000 (0:00:05.560) 0:02:07.304 **** 2025-02-19 09:17:22.578665 | orchestrator | =============================================================================== 2025-02-19 09:17:22.578702 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 36.36s 2025-02-19 09:17:22.578748 | orchestrator | service-ks-register : ironic | Granting user roles --------------------- 17.69s 2025-02-19 09:17:22.578946 | orchestrator | service-ks-register : ironic | Creating endpoints ---------------------- 14.36s 2025-02-19 09:17:22.578976 | orchestrator | service-ks-register : ironic | Creating users --------------------------- 7.79s 2025-02-19 09:17:22.578995 | orchestrator | service-ks-register : ironic | Creating services ------------------------ 6.84s 2025-02-19 09:17:22.579013 | orchestrator | ironic : Copying over config.json files for services -------------------- 5.56s 2025-02-19 09:17:22.579034 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.42s 2025-02-19 09:17:22.579056 | orchestrator | service-ks-register : ironic | Creating projects ------------------------ 4.81s 2025-02-19 09:17:22.579077 | orchestrator | service-cert-copy : ironic | Copying over extra CA certificates --------- 4.73s 2025-02-19 09:17:22.579132 | orchestrator | service-ks-register : ironic | Creating roles --------------------------- 3.58s 2025-02-19 09:17:22.579157 | orchestrator | ironic : Ensuring config directories exist ------------------------------ 3.29s 2025-02-19 09:17:22.579176 | orchestrator | service-cert-copy : ironic | Copying over backend internal TLS key ------ 3.23s 2025-02-19 09:17:22.579204 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.82s 2025-02-19 09:17:22.579225 | orchestrator | service-cert-copy : ironic | Copying over backend internal TLS certificate --- 2.11s 2025-02-19 09:17:22.579246 | orchestrator | module-load : Load modules ---------------------------------------------- 1.92s 2025-02-19 09:17:22.579275 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.43s 2025-02-19 09:17:22.579297 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.29s 2025-02-19 09:17:22.579321 | orchestrator | ironic : include_tasks -------------------------------------------------- 0.75s 2025-02-19 09:17:22.579391 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2025-02-19 09:17:22.579421 | orchestrator | Ensure the destination directory exists --------------------------------- 0.71s 2025-02-19 09:17:22.579440 | orchestrator | 2025-02-19 09:17:19 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:22.579459 | orchestrator | 2025-02-19 09:17:19 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:22.579478 | orchestrator | 2025-02-19 09:17:19 | INFO  | Task 
20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:22.579499 | orchestrator | 2025-02-19 09:17:19 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:22.579521 | orchestrator | 2025-02-19 09:17:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:22.579573 | orchestrator | 2025-02-19 09:17:22 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:22.579961 | orchestrator | 2025-02-19 09:17:22 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:22.580016 | orchestrator | 2025-02-19 09:17:22 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:22.581966 | orchestrator | 2025-02-19 09:17:22 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:25.622558 | orchestrator | 2025-02-19 09:17:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:25.622714 | orchestrator | 2025-02-19 09:17:25 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:25.624289 | orchestrator | 2025-02-19 09:17:25 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:25.625739 | orchestrator | 2025-02-19 09:17:25 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:25.627619 | orchestrator | 2025-02-19 09:17:25 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:28.666300 | orchestrator | 2025-02-19 09:17:25 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:28.666578 | orchestrator | 2025-02-19 09:17:28 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:28.667297 | orchestrator | 2025-02-19 09:17:28 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:28.668611 | orchestrator | 2025-02-19 09:17:28 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:28.669955 | orchestrator | 2025-02-19 09:17:28 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:31.701067 | orchestrator | 2025-02-19 09:17:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:31.701243 | orchestrator | 2025-02-19 09:17:31 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:31.704044 | orchestrator | 2025-02-19 09:17:31 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:31.704094 | orchestrator | 2025-02-19 09:17:31 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:31.707792 | orchestrator | 2025-02-19 09:17:31 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:34.744050 | orchestrator | 2025-02-19 09:17:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:34.744186 | orchestrator | 2025-02-19 09:17:34 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:34.744521 | orchestrator | 2025-02-19 09:17:34 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:34.745367 | orchestrator | 2025-02-19 09:17:34 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:34.746489 | orchestrator | 2025-02-19 09:17:34 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:37.784075 | orchestrator | 2025-02-19 09:17:34 | INFO  | Wait 1 
second(s) until the next check 2025-02-19 09:17:37.784222 | orchestrator | 2025-02-19 09:17:37 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:37.788758 | orchestrator | 2025-02-19 09:17:37 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:37.790567 | orchestrator | 2025-02-19 09:17:37 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:37.794247 | orchestrator | 2025-02-19 09:17:37 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:40.849132 | orchestrator | 2025-02-19 09:17:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:40.849264 | orchestrator | 2025-02-19 09:17:40 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:40.849599 | orchestrator | 2025-02-19 09:17:40 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:40.850724 | orchestrator | 2025-02-19 09:17:40 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:40.851980 | orchestrator | 2025-02-19 09:17:40 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:43.894509 | orchestrator | 2025-02-19 09:17:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:43.894612 | orchestrator | 2025-02-19 09:17:43 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:43.895529 | orchestrator | 2025-02-19 09:17:43 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:43.896911 | orchestrator | 2025-02-19 09:17:43 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:43.898179 | orchestrator | 2025-02-19 09:17:43 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:43.898473 | orchestrator | 2025-02-19 09:17:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:46.939778 | orchestrator | 2025-02-19 09:17:46 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:46.942394 | orchestrator | 2025-02-19 09:17:46 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:46.943202 | orchestrator | 2025-02-19 09:17:46 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:46.944602 | orchestrator | 2025-02-19 09:17:46 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:46.944791 | orchestrator | 2025-02-19 09:17:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:49.980498 | orchestrator | 2025-02-19 09:17:49 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:49.982655 | orchestrator | 2025-02-19 09:17:49 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:49.982708 | orchestrator | 2025-02-19 09:17:49 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:49.982734 | orchestrator | 2025-02-19 09:17:49 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:53.015168 | orchestrator | 2025-02-19 09:17:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:53.015312 | orchestrator | 2025-02-19 09:17:53 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:53.017184 | orchestrator | 2025-02-19 09:17:53 | INFO  | Task 
9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:53.018545 | orchestrator | 2025-02-19 09:17:53 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:53.018615 | orchestrator | 2025-02-19 09:17:53 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:56.050123 | orchestrator | 2025-02-19 09:17:53 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:56.050273 | orchestrator | 2025-02-19 09:17:56 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state STARTED 2025-02-19 09:17:56.051587 | orchestrator | 2025-02-19 09:17:56 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:17:56.052194 | orchestrator | 2025-02-19 09:17:56 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:17:56.052232 | orchestrator | 2025-02-19 09:17:56 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:17:56.052547 | orchestrator | 2025-02-19 09:17:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:17:59.083666 | orchestrator | 2025-02-19 09:17:59 | INFO  | Task b893390e-b4ff-4f52-96b6-f0429252961c is in state SUCCESS 2025-02-19 09:17:59.086197 | orchestrator | 2025-02-19 09:17:59.086278 | orchestrator | 2025-02-19 09:17:59.086291 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:17:59.086300 | orchestrator | 2025-02-19 09:17:59.086309 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:17:59.086317 | orchestrator | Wednesday 19 February 2025 09:14:27 +0000 (0:00:00.470) 0:00:00.470 **** 2025-02-19 09:17:59.086324 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:17:59.086352 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:17:59.086361 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:17:59.086368 | orchestrator | 2025-02-19 09:17:59.086376 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:17:59.086383 | orchestrator | Wednesday 19 February 2025 09:14:28 +0000 (0:00:00.806) 0:00:01.276 **** 2025-02-19 09:17:59.086392 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-02-19 09:17:59.086416 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-02-19 09:17:59.086425 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-02-19 09:17:59.086432 | orchestrator | 2025-02-19 09:17:59.086440 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-02-19 09:17:59.086447 | orchestrator | 2025-02-19 09:17:59.086455 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-19 09:17:59.086466 | orchestrator | Wednesday 19 February 2025 09:14:30 +0000 (0:00:02.518) 0:00:03.794 **** 2025-02-19 09:17:59.086479 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:17:59.086514 | orchestrator | 2025-02-19 09:17:59.086528 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-02-19 09:17:59.086540 | orchestrator | Wednesday 19 February 2025 09:14:32 +0000 (0:00:01.954) 0:00:05.749 **** 2025-02-19 09:17:59.086548 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-02-19 09:17:59.086555 | orchestrator | 2025-02-19 
09:17:59.086563 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-02-19 09:17:59.086570 | orchestrator | Wednesday 19 February 2025 09:14:37 +0000 (0:00:04.810) 0:00:10.560 **** 2025-02-19 09:17:59.086581 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-02-19 09:17:59.086594 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-02-19 09:17:59.086607 | orchestrator | 2025-02-19 09:17:59.086620 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-02-19 09:17:59.086629 | orchestrator | Wednesday 19 February 2025 09:14:45 +0000 (0:00:08.201) 0:00:18.761 **** 2025-02-19 09:17:59.086636 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:17:59.086644 | orchestrator | 2025-02-19 09:17:59.086651 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-02-19 09:17:59.086659 | orchestrator | Wednesday 19 February 2025 09:14:49 +0000 (0:00:04.219) 0:00:22.981 **** 2025-02-19 09:17:59.086666 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:17:59.086674 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-02-19 09:17:59.086681 | orchestrator | 2025-02-19 09:17:59.086689 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-02-19 09:17:59.086696 | orchestrator | Wednesday 19 February 2025 09:14:54 +0000 (0:00:04.897) 0:00:27.879 **** 2025-02-19 09:17:59.086704 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:17:59.086711 | orchestrator | 2025-02-19 09:17:59.086718 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-02-19 09:17:59.086726 | orchestrator | Wednesday 19 February 2025 09:14:58 +0000 (0:00:04.034) 0:00:31.914 **** 2025-02-19 09:17:59.086733 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-02-19 09:17:59.086741 | orchestrator | 2025-02-19 09:17:59.086748 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-02-19 09:17:59.086756 | orchestrator | Wednesday 19 February 2025 09:15:03 +0000 (0:00:04.446) 0:00:36.360 **** 2025-02-19 09:17:59.086767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.086800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.086822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.086835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.086996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087045 | orchestrator | 2025-02-19 09:17:59.087053 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-02-19 09:17:59.087061 | orchestrator | Wednesday 19 February 2025 09:15:06 +0000 (0:00:03.461) 0:00:39.822 **** 2025-02-19 09:17:59.087068 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:59.087076 | orchestrator | 2025-02-19 09:17:59.087084 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-02-19 09:17:59.087094 | orchestrator | Wednesday 19 February 2025 09:15:06 +0000 (0:00:00.166) 0:00:39.988 **** 2025-02-19 09:17:59.087103 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:59.087116 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:59.087128 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:59.087141 | orchestrator | 2025-02-19 09:17:59.087154 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-19 09:17:59.087162 | orchestrator | Wednesday 19 February 2025 09:15:07 +0000 (0:00:00.457) 0:00:40.446 **** 2025-02-19 09:17:59.087170 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
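For readers following the designate deployment above: the service-ks-register tasks are roughly equivalent to the following OpenStack CLI calls. This is a minimal sketch reconstructed from the values shown in this log (service type dns, port 9001, the api-int/api endpoints, the service project and admin role); kolla-ansible actually performs these steps through its own Ansible modules, and the --password value here is a placeholder, not taken from the log:

  # Register the designate service and its API endpoints (type: dns, port 9001)
  openstack service create --name designate dns
  openstack endpoint create designate internal https://api-int.testbed.osism.xyz:9001
  openstack endpoint create designate public https://api.testbed.osism.xyz:9001
  # Service project, service user, admin role, and the role grant
  openstack project create service
  openstack user create --project service --password <secret> designate
  openstack role create admin
  openstack role add --project service --user designate admin

Note that in the log the project and role tasks report "ok" rather than "changed" because the service project and admin role already exist; only the designate service, endpoints, user, and role grant are newly created in this run.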
2025-02-19 09:17:59.087180 | orchestrator | 2025-02-19 09:17:59.087192 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-02-19 09:17:59.087204 | orchestrator | Wednesday 19 February 2025 09:15:08 +0000 (0:00:00.744) 0:00:41.190 **** 2025-02-19 09:17:59.087217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.087229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.087243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.087255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087426 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.087454 | orchestrator | 2025-02-19 09:17:59.087461 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-02-19 09:17:59.087752 | orchestrator | Wednesday 19 February 2025 09:15:14 +0000 (0:00:06.702) 0:00:47.893 **** 2025-02-19 09:17:59.087773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.087782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.087790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087830 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:59.087862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-02-19 09:17:59.087872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.087879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087915 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:59.087939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.087948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.087957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.087996 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:59.088004 | orchestrator | 2025-02-19 09:17:59.088013 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-02-19 09:17:59.088021 | orchestrator | Wednesday 19 February 2025 09:15:15 +0000 (0:00:00.858) 0:00:48.751 **** 2025-02-19 09:17:59.088055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.088064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.088073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088113 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:59.088138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.088148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.088156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088191 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:59.088215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.088224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.088232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088267 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:59.088275 | orchestrator | 2025-02-19 09:17:59.088283 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-02-19 09:17:59.088290 | orchestrator | Wednesday 19 February 2025 09:15:16 +0000 (0:00:01.116) 0:00:49.868 **** 2025-02-19 09:17:59.088315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-02-19 09:17:59.088324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.088331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.088360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2025-02-19 09:17:59.088530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088571 | orchestrator | 2025-02-19 09:17:59.088580 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-02-19 09:17:59.088588 | orchestrator | Wednesday 19 February 2025 09:15:22 +0000 (0:00:05.812) 0:00:55.680 **** 2025-02-19 09:17:59.088597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.088611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.088620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.088629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088763 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.088866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.088873 | orchestrator | 2025-02-19 09:17:59.088881 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-02-19 09:17:59.088889 | orchestrator | Wednesday 19 February 2025 09:15:52 +0000 (0:00:30.325) 0:01:26.006 **** 2025-02-19 09:17:59.088896 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-19 09:17:59.088904 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-19 09:17:59.088912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-02-19 09:17:59.088919 | orchestrator | 2025-02-19 09:17:59.088927 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-02-19 09:17:59.088941 | orchestrator | Wednesday 19 February 2025 09:15:59 +0000 (0:00:06.852) 0:01:32.858 **** 2025-02-19 09:17:59.088948 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-19 09:17:59.088956 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-19 09:17:59.088964 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-02-19 09:17:59.088971 | orchestrator | 2025-02-19 09:17:59.088978 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-02-19 09:17:59.088986 | orchestrator | Wednesday 19 February 2025 09:16:05 +0000 (0:00:06.097) 0:01:38.955 **** 2025-02-19 09:17:59.088994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 
'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089188 | orchestrator | 2025-02-19 09:17:59.089196 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-02-19 09:17:59.089204 | orchestrator | Wednesday 19 February 2025 09:16:10 +0000 (0:00:04.579) 0:01:43.535 **** 2025-02-19 09:17:59.089216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089240 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 
09:17:59.089413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089421 | orchestrator | 2025-02-19 09:17:59.089429 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-19 09:17:59.089437 | orchestrator | Wednesday 19 February 2025 09:16:15 +0000 (0:00:05.378) 0:01:48.913 **** 2025-02-19 09:17:59.089444 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:59.089452 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:59.089459 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:59.089466 | orchestrator | 2025-02-19 09:17:59.089474 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-02-19 09:17:59.089481 | orchestrator | Wednesday 19 February 2025 09:16:16 +0000 (0:00:00.608) 0:01:49.522 **** 2025-02-19 09:17:59.089489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.089508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089552 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:59.089560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089573 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.089581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089652 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:59.089660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-19 09:17:59.089671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-19 09:17:59.089679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089734 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:59.089741 | orchestrator | 2025-02-19 09:17:59.089749 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-02-19 09:17:59.089757 | orchestrator | Wednesday 19 February 2025 09:16:19 +0000 (0:00:02.797) 0:01:52.320 **** 2025-02-19 09:17:59.089764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.089772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.089790 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-19 09:17:59.089798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089833 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:17:59.089963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-19 09:17:59.089971 | orchestrator | 2025-02-19 09:17:59.089978 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-19 09:17:59.089986 | orchestrator | Wednesday 19 February 2025 09:16:27 +0000 (0:00:08.365) 0:02:00.685 **** 2025-02-19 09:17:59.089993 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:17:59.090001 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:17:59.090008 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:17:59.090061 | orchestrator | 2025-02-19 09:17:59.090077 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-02-19 09:17:59.090089 | orchestrator | Wednesday 19 February 2025 09:16:28 +0000 (0:00:00.397) 0:02:01.083 **** 2025-02-19 09:17:59.090102 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-02-19 09:17:59.090114 | orchestrator | 2025-02-19 09:17:59.090124 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-02-19 09:17:59.090132 | orchestrator | Wednesday 19 February 2025 09:16:30 +0000 (0:00:02.385) 0:02:03.469 **** 2025-02-19 09:17:59.090140 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-19 09:17:59.090152 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-02-19 09:17:59.090160 | orchestrator | 2025-02-19 09:17:59.090167 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-02-19 09:17:59.090180 | orchestrator | Wednesday 19 February 2025 09:16:33 +0000 (0:00:02.865) 0:02:06.334 **** 2025-02-19 09:17:59.090187 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:59.090195 | orchestrator | 2025-02-19 09:17:59.090202 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-02-19 09:17:59.090210 | orchestrator | Wednesday 19 February 2025 09:16:49 +0000 (0:00:16.437) 0:02:22.772 **** 2025-02-19 09:17:59.090217 | orchestrator | 
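[editor's note] The per-service dictionaries dumped in the loop output above (the "item=" entries for the designate containers) all follow one shape: a container name, a kolla image tag, bind-mounted config/log volumes, and a healthcheck whose test is one of the healthcheck_port, healthcheck_curl, or healthcheck_listen helpers. The sketch below is not the kolla-ansible implementation; it only restates that structure in runnable Python, with values copied from the designate-worker entry, to make the loop items easier to read.

    # Rough sketch of the per-service dict shape seen in the loop items above;
    # values are copied from the designate-worker entry (testbed nodes).
    designate_services = {
        "designate-worker": {
            "container_name": "designate_worker",
            "group": "designate-worker",
            "enabled": True,
            "image": "registry.osism.tech/kolla/designate-worker:2024.1",
            "volumes": [
                "/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "/etc/timezone:/etc/timezone:ro",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"],
                "timeout": "30",
            },
        },
    }

    # The "Check designate containers" task iterates items like the above; as the
    # log suggests, entries whose 'enabled' flag is False (designate-sink) are
    # skipped, the rest are checked/started.
    for name, svc in designate_services.items():
        if not svc["enabled"]:
            continue
        print(name, svc["image"], svc["healthcheck"]["test"][1])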
2025-02-19 09:17:59.090225 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-02-19 09:17:59.090233 | orchestrator | Wednesday 19 February 2025 09:16:50 +0000 (0:00:00.481) 0:02:23.254 **** 2025-02-19 09:17:59.090240 | orchestrator | 2025-02-19 09:17:59.090248 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-02-19 09:17:59.090255 | orchestrator | Wednesday 19 February 2025 09:16:50 +0000 (0:00:00.230) 0:02:23.485 **** 2025-02-19 09:17:59.090263 | orchestrator | 2025-02-19 09:17:59.090270 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-02-19 09:17:59.090278 | orchestrator | Wednesday 19 February 2025 09:16:50 +0000 (0:00:00.243) 0:02:23.728 **** 2025-02-19 09:17:59.090285 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:59.090293 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:59.090301 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:59.090313 | orchestrator | 2025-02-19 09:17:59.090321 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-02-19 09:17:59.090330 | orchestrator | Wednesday 19 February 2025 09:17:07 +0000 (0:00:16.376) 0:02:40.105 **** 2025-02-19 09:17:59.090357 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:59.090366 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:59.090374 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:59.090382 | orchestrator | 2025-02-19 09:17:59.090390 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-02-19 09:17:59.090398 | orchestrator | Wednesday 19 February 2025 09:17:16 +0000 (0:00:09.295) 0:02:49.400 **** 2025-02-19 09:17:59.090406 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:59.090413 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:59.090421 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:59.090429 | orchestrator | 2025-02-19 09:17:59.090437 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-02-19 09:17:59.090448 | orchestrator | Wednesday 19 February 2025 09:17:30 +0000 (0:00:13.807) 0:03:03.208 **** 2025-02-19 09:17:59.090456 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:59.090464 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:59.090472 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:59.090480 | orchestrator | 2025-02-19 09:17:59.090487 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-02-19 09:17:59.090495 | orchestrator | Wednesday 19 February 2025 09:17:36 +0000 (0:00:06.348) 0:03:09.556 **** 2025-02-19 09:17:59.090503 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:59.090511 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:59.090519 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:59.090527 | orchestrator | 2025-02-19 09:17:59.090535 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-02-19 09:17:59.090543 | orchestrator | Wednesday 19 February 2025 09:17:45 +0000 (0:00:09.096) 0:03:18.653 **** 2025-02-19 09:17:59.090551 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:59.090559 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:17:59.090567 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:17:59.090575 | orchestrator | 
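[editor's note] Once the restart handlers above finish, the play recap and task timing summary follow, and the osism client goes back to polling the deployment tasks it queued (the repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines further down). The following sketch only mirrors that visible wait-loop behaviour; get_task_state is a hypothetical placeholder, not a real osism API.

    import itertools
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll until every task reports a terminal state.

        Illustrative only: reproduces the 'is in state ...' and 'Wait N second(s)
        until the next check' messages seen in the log; get_task_state is an
        assumed callable that returns a state string for a task id.
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:g} second(s) until the next check")
                time.sleep(interval)

    if __name__ == "__main__":
        # Tiny demo with a fake state source: each task reports STARTED twice,
        # then SUCCESS (purely illustrative ids and states).
        fake = {tid: itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
                for tid in ["9c64599f", "20e0e440"]}
        wait_for_tasks(fake, lambda tid: next(fake[tid]), interval=0.1)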
2025-02-19 09:17:59.090583 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-02-19 09:17:59.090591 | orchestrator | Wednesday 19 February 2025 09:17:53 +0000 (0:00:07.782) 0:03:26.436 **** 2025-02-19 09:17:59.090598 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:17:59.090606 | orchestrator | 2025-02-19 09:17:59.090614 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:17:59.090627 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-19 09:17:59.090636 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 09:17:59.090644 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 09:17:59.090652 | orchestrator | 2025-02-19 09:17:59.090660 | orchestrator | 2025-02-19 09:17:59.090667 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:17:59.090675 | orchestrator | Wednesday 19 February 2025 09:17:58 +0000 (0:00:05.071) 0:03:31.508 **** 2025-02-19 09:17:59.090683 | orchestrator | =============================================================================== 2025-02-19 09:17:59.090691 | orchestrator | designate : Copying over designate.conf -------------------------------- 30.33s 2025-02-19 09:17:59.090699 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.44s 2025-02-19 09:17:59.090707 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.38s 2025-02-19 09:17:59.090715 | orchestrator | designate : Restart designate-central container ------------------------ 13.81s 2025-02-19 09:17:59.090723 | orchestrator | designate : Restart designate-api container ----------------------------- 9.30s 2025-02-19 09:17:59.090731 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.10s 2025-02-19 09:17:59.090739 | orchestrator | designate : Check designate containers ---------------------------------- 8.37s 2025-02-19 09:17:59.090747 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 8.20s 2025-02-19 09:17:59.090755 | orchestrator | designate : Restart designate-worker container -------------------------- 7.78s 2025-02-19 09:17:59.090767 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.85s 2025-02-19 09:18:02.137556 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.70s 2025-02-19 09:18:02.137682 | orchestrator | designate : Restart designate-producer container ------------------------ 6.35s 2025-02-19 09:18:02.137700 | orchestrator | designate : Copying over named.conf ------------------------------------- 6.10s 2025-02-19 09:18:02.137710 | orchestrator | designate : Copying over config.json files for services ----------------- 5.81s 2025-02-19 09:18:02.137720 | orchestrator | designate : Copying over rndc.key --------------------------------------- 5.38s 2025-02-19 09:18:02.137729 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.07s 2025-02-19 09:18:02.137738 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.90s 2025-02-19 09:18:02.137747 | orchestrator | service-ks-register : designate | Creating services 
--------------------- 4.81s 2025-02-19 09:18:02.137756 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.58s 2025-02-19 09:18:02.137765 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.45s 2025-02-19 09:18:02.137774 | orchestrator | 2025-02-19 09:17:59 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:02.137783 | orchestrator | 2025-02-19 09:17:59 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:02.137792 | orchestrator | 2025-02-19 09:17:59 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:02.137801 | orchestrator | 2025-02-19 09:17:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:02.137825 | orchestrator | 2025-02-19 09:18:02 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:02.138557 | orchestrator | 2025-02-19 09:18:02 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:02.139887 | orchestrator | 2025-02-19 09:18:02 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:02.141327 | orchestrator | 2025-02-19 09:18:02 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:02.141548 | orchestrator | 2025-02-19 09:18:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:05.191848 | orchestrator | 2025-02-19 09:18:05 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:05.192672 | orchestrator | 2025-02-19 09:18:05 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:05.194283 | orchestrator | 2025-02-19 09:18:05 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:05.196680 | orchestrator | 2025-02-19 09:18:05 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:08.252711 | orchestrator | 2025-02-19 09:18:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:08.252817 | orchestrator | 2025-02-19 09:18:08 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:08.254235 | orchestrator | 2025-02-19 09:18:08 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:08.256659 | orchestrator | 2025-02-19 09:18:08 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:08.260934 | orchestrator | 2025-02-19 09:18:08 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:11.300272 | orchestrator | 2025-02-19 09:18:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:11.300512 | orchestrator | 2025-02-19 09:18:11 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:11.300974 | orchestrator | 2025-02-19 09:18:11 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:11.302798 | orchestrator | 2025-02-19 09:18:11 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:11.303906 | orchestrator | 2025-02-19 09:18:11 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:14.353989 | orchestrator | 2025-02-19 09:18:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:14.354322 | orchestrator | 2025-02-19 09:18:14 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state 
STARTED 2025-02-19 09:18:14.355899 | orchestrator | 2025-02-19 09:18:14 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:14.357864 | orchestrator | 2025-02-19 09:18:14 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:14.359278 | orchestrator | 2025-02-19 09:18:14 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:17.390536 | orchestrator | 2025-02-19 09:18:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:17.390633 | orchestrator | 2025-02-19 09:18:17 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:17.391522 | orchestrator | 2025-02-19 09:18:17 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:17.392465 | orchestrator | 2025-02-19 09:18:17 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:17.393258 | orchestrator | 2025-02-19 09:18:17 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:20.431578 | orchestrator | 2025-02-19 09:18:17 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:20.431735 | orchestrator | 2025-02-19 09:18:20 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:20.432047 | orchestrator | 2025-02-19 09:18:20 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:20.432109 | orchestrator | 2025-02-19 09:18:20 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:20.433535 | orchestrator | 2025-02-19 09:18:20 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:23.473913 | orchestrator | 2025-02-19 09:18:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:23.474140 | orchestrator | 2025-02-19 09:18:23 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:23.475282 | orchestrator | 2025-02-19 09:18:23 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:23.480785 | orchestrator | 2025-02-19 09:18:23 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:23.480886 | orchestrator | 2025-02-19 09:18:23 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:26.531230 | orchestrator | 2025-02-19 09:18:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:26.531330 | orchestrator | 2025-02-19 09:18:26 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:26.531984 | orchestrator | 2025-02-19 09:18:26 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:26.533634 | orchestrator | 2025-02-19 09:18:26 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state STARTED 2025-02-19 09:18:26.535534 | orchestrator | 2025-02-19 09:18:26 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:29.586167 | orchestrator | 2025-02-19 09:18:26 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:29.586321 | orchestrator | 2025-02-19 09:18:29 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:29.586521 | orchestrator | 2025-02-19 09:18:29 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:29.587499 | orchestrator | 2025-02-19 09:18:29 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state 
STARTED 2025-02-19 09:18:29.588419 | orchestrator | 2025-02-19 09:18:29 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:32.617994 | orchestrator | 2025-02-19 09:18:29 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:32.618239 | orchestrator | 2025-02-19 09:18:32 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:32.618662 | orchestrator | 2025-02-19 09:18:32 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:32.619912 | orchestrator | 2025-02-19 09:18:32 | INFO  | Task 20e0e440-8cd4-4a94-9020-93908184051f is in state SUCCESS 2025-02-19 09:18:32.620976 | orchestrator | 2025-02-19 09:18:32.621032 | orchestrator | 2025-02-19 09:18:32.621054 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:18:32.621076 | orchestrator | 2025-02-19 09:18:32.621119 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:18:32.621143 | orchestrator | Wednesday 19 February 2025 09:17:15 +0000 (0:00:00.318) 0:00:00.318 **** 2025-02-19 09:18:32.621164 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:18:32.621188 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:18:32.621209 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:18:32.621230 | orchestrator | 2025-02-19 09:18:32.621252 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:18:32.621274 | orchestrator | Wednesday 19 February 2025 09:17:16 +0000 (0:00:00.426) 0:00:00.744 **** 2025-02-19 09:18:32.621326 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-02-19 09:18:32.621415 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-02-19 09:18:32.621621 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-02-19 09:18:32.621639 | orchestrator | 2025-02-19 09:18:32.621652 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-02-19 09:18:32.621665 | orchestrator | 2025-02-19 09:18:32.621678 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-19 09:18:32.621692 | orchestrator | Wednesday 19 February 2025 09:17:16 +0000 (0:00:00.347) 0:00:01.092 **** 2025-02-19 09:18:32.621705 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:18:32.621720 | orchestrator | 2025-02-19 09:18:32.621733 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-02-19 09:18:32.621746 | orchestrator | Wednesday 19 February 2025 09:17:17 +0000 (0:00:01.019) 0:00:02.111 **** 2025-02-19 09:18:32.621759 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-02-19 09:18:32.621772 | orchestrator | 2025-02-19 09:18:32.621785 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-02-19 09:18:32.621798 | orchestrator | Wednesday 19 February 2025 09:17:22 +0000 (0:00:04.791) 0:00:06.903 **** 2025-02-19 09:18:32.621811 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-02-19 09:18:32.621825 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-02-19 09:18:32.621838 | orchestrator | 2025-02-19 
09:18:32.621851 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-02-19 09:18:32.621864 | orchestrator | Wednesday 19 February 2025 09:17:29 +0000 (0:00:06.823) 0:00:13.726 **** 2025-02-19 09:18:32.621877 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:18:32.621891 | orchestrator | 2025-02-19 09:18:32.621904 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-02-19 09:18:32.621917 | orchestrator | Wednesday 19 February 2025 09:17:32 +0000 (0:00:03.584) 0:00:17.311 **** 2025-02-19 09:18:32.621930 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:18:32.621943 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-02-19 09:18:32.621956 | orchestrator | 2025-02-19 09:18:32.621969 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-02-19 09:18:32.621982 | orchestrator | Wednesday 19 February 2025 09:17:37 +0000 (0:00:04.227) 0:00:21.539 **** 2025-02-19 09:18:32.621995 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:18:32.622008 | orchestrator | 2025-02-19 09:18:32.622077 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-02-19 09:18:32.622102 | orchestrator | Wednesday 19 February 2025 09:17:41 +0000 (0:00:04.526) 0:00:26.066 **** 2025-02-19 09:18:32.622123 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-02-19 09:18:32.622147 | orchestrator | 2025-02-19 09:18:32.622171 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-19 09:18:32.622194 | orchestrator | Wednesday 19 February 2025 09:17:48 +0000 (0:00:06.586) 0:00:32.652 **** 2025-02-19 09:18:32.622208 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:18:32.622220 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:18:32.622232 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:18:32.622245 | orchestrator | 2025-02-19 09:18:32.622259 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-02-19 09:18:32.622273 | orchestrator | Wednesday 19 February 2025 09:17:48 +0000 (0:00:00.594) 0:00:33.247 **** 2025-02-19 09:18:32.622291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.622402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.622424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.622439 | orchestrator | 2025-02-19 09:18:32.622453 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-02-19 09:18:32.622467 | orchestrator | Wednesday 19 February 2025 09:17:49 +0000 (0:00:01.005) 0:00:34.253 **** 2025-02-19 09:18:32.622481 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:18:32.622495 | orchestrator | 2025-02-19 09:18:32.622509 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-02-19 09:18:32.622523 | orchestrator | Wednesday 19 February 2025 09:17:49 +0000 (0:00:00.108) 0:00:34.361 **** 2025-02-19 09:18:32.622537 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:18:32.622551 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:18:32.622564 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:18:32.622578 | orchestrator | 2025-02-19 09:18:32.622592 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-19 09:18:32.622606 | orchestrator | Wednesday 19 February 2025 09:17:50 +0000 (0:00:00.354) 0:00:34.716 **** 2025-02-19 09:18:32.622619 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:18:32.622631 | orchestrator | 2025-02-19 09:18:32.622643 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-02-19 09:18:32.622655 | orchestrator | Wednesday 19 February 2025 09:17:50 +0000 (0:00:00.651) 0:00:35.367 **** 2025-02-19 09:18:32.622667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.622702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.622731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.622754 | orchestrator | 2025-02-19 09:18:32.622773 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-02-19 09:18:32.622792 | orchestrator | Wednesday 19 February 2025 09:17:52 +0000 (0:00:01.549) 0:00:36.917 **** 2025-02-19 09:18:32.622814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.622834 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:18:32.622855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.622888 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:18:32.622919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.622942 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:18:32.622963 | orchestrator | 2025-02-19 09:18:32.622976 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-02-19 09:18:32.622988 | orchestrator | Wednesday 19 February 2025 09:17:52 +0000 (0:00:00.496) 0:00:37.413 **** 2025-02-19 09:18:32.623014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.623028 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:18:32.623040 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.623053 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:18:32.623066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.623085 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:18:32.623097 | orchestrator | 2025-02-19 09:18:32.623110 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-02-19 09:18:32.623122 | orchestrator | Wednesday 19 February 2025 09:17:54 +0000 (0:00:01.211) 0:00:38.625 **** 2025-02-19 09:18:32.623140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623188 | orchestrator | 2025-02-19 09:18:32.623201 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-02-19 09:18:32.623219 | orchestrator | Wednesday 19 February 2025 09:17:55 +0000 (0:00:01.257) 0:00:39.882 **** 2025-02-19 09:18:32.623237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623328 | orchestrator | 2025-02-19 09:18:32.623377 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-02-19 09:18:32.623400 | orchestrator | Wednesday 19 February 2025 09:17:57 +0000 (0:00:01.924) 0:00:41.807 **** 2025-02-19 09:18:32.623423 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-19 09:18:32.623446 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-19 09:18:32.623467 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-02-19 09:18:32.623481 | orchestrator | 2025-02-19 09:18:32.623494 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-02-19 09:18:32.623506 | orchestrator | Wednesday 19 February 2025 09:17:58 +0000 (0:00:01.651) 0:00:43.458 **** 2025-02-19 09:18:32.623519 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:18:32.623531 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:18:32.623543 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:18:32.623556 | orchestrator | 2025-02-19 09:18:32.623568 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-02-19 09:18:32.623588 | orchestrator | Wednesday 19 February 2025 09:18:00 +0000 (0:00:01.746) 0:00:45.205 **** 2025-02-19 09:18:32.623601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.623614 | orchestrator | skipping: [testbed-node-0] 2025-02-19 
09:18:32.623627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.623640 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:18:32.623661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-19 09:18:32.623674 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:18:32.623687 | orchestrator | 2025-02-19 09:18:32.623699 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-02-19 09:18:32.623711 | orchestrator | Wednesday 19 February 2025 09:18:01 +0000 (0:00:00.559) 0:00:45.765 **** 2025-02-19 09:18:32.623743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-02-19 09:18:32.623790 | orchestrator | 2025-02-19 09:18:32.623802 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-02-19 09:18:32.623814 | orchestrator | Wednesday 19 February 2025 09:18:02 +0000 (0:00:01.495) 0:00:47.260 **** 2025-02-19 09:18:32.623834 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:18:32.623854 | orchestrator | 2025-02-19 09:18:32.623874 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-02-19 09:18:32.623894 | orchestrator | Wednesday 19 February 2025 09:18:05 +0000 (0:00:02.303) 0:00:49.564 **** 2025-02-19 09:18:32.623914 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:18:32.623933 | orchestrator | 2025-02-19 09:18:32.623955 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-02-19 09:18:32.623975 | orchestrator | Wednesday 19 February 2025 09:18:08 +0000 (0:00:03.874) 0:00:53.438 **** 2025-02-19 09:18:32.623997 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:18:32.624017 | orchestrator | 2025-02-19 09:18:32.624039 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-19 09:18:32.624059 | orchestrator | Wednesday 19 February 2025 09:18:24 +0000 (0:00:15.190) 0:01:08.629 **** 2025-02-19 09:18:32.624079 | orchestrator | 2025-02-19 09:18:32.624098 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-19 09:18:32.624117 | orchestrator | Wednesday 19 February 2025 09:18:24 +0000 (0:00:00.089) 0:01:08.718 **** 2025-02-19 09:18:32.624139 | orchestrator | 2025-02-19 09:18:32.624169 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-02-19 09:18:35.664011 | orchestrator | Wednesday 19 February 2025 09:18:24 +0000 (0:00:00.070) 0:01:08.788 **** 2025-02-19 09:18:35.664111 | orchestrator | 2025-02-19 09:18:35.664123 | orchestrator | 
RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-02-19 09:18:35.664132 | orchestrator | Wednesday 19 February 2025 09:18:24 +0000 (0:00:00.310) 0:01:09.099 **** 2025-02-19 09:18:35.664139 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:18:35.664165 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:18:35.664173 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:18:35.664180 | orchestrator | 2025-02-19 09:18:35.664187 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:18:35.664205 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 09:18:35.664214 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 09:18:35.664221 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 09:18:35.664228 | orchestrator | 2025-02-19 09:18:35.664235 | orchestrator | 2025-02-19 09:18:35.664242 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:18:35.664251 | orchestrator | Wednesday 19 February 2025 09:18:31 +0000 (0:00:07.029) 0:01:16.128 **** 2025-02-19 09:18:35.664259 | orchestrator | =============================================================================== 2025-02-19 09:18:35.664265 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.19s 2025-02-19 09:18:35.664272 | orchestrator | placement : Restart placement-api container ----------------------------- 7.03s 2025-02-19 09:18:35.664279 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.82s 2025-02-19 09:18:35.664286 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 6.59s 2025-02-19 09:18:35.664293 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.79s 2025-02-19 09:18:35.664300 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.53s 2025-02-19 09:18:35.664307 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.23s 2025-02-19 09:18:35.664313 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.87s 2025-02-19 09:18:35.664320 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.58s 2025-02-19 09:18:35.664327 | orchestrator | placement : Creating placement databases -------------------------------- 2.30s 2025-02-19 09:18:35.664334 | orchestrator | placement : Copying over placement.conf --------------------------------- 1.92s 2025-02-19 09:18:35.664377 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.75s 2025-02-19 09:18:35.664385 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.65s 2025-02-19 09:18:35.664392 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.55s 2025-02-19 09:18:35.664399 | orchestrator | placement : Check placement containers ---------------------------------- 1.50s 2025-02-19 09:18:35.664406 | orchestrator | placement : Copying over config.json files for services ----------------- 1.26s 2025-02-19 09:18:35.664412 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.21s 2025-02-19 
09:18:35.664419 | orchestrator | placement : include_tasks ----------------------------------------------- 1.02s 2025-02-19 09:18:35.664426 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.01s 2025-02-19 09:18:35.664433 | orchestrator | placement : include_tasks ----------------------------------------------- 0.65s 2025-02-19 09:18:35.664441 | orchestrator | 2025-02-19 09:18:32 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:35.664448 | orchestrator | 2025-02-19 09:18:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:35.664467 | orchestrator | 2025-02-19 09:18:35 | INFO  | Task b20c3e7c-5b11-4baf-97fc-c0d6ccd412dc is in state STARTED 2025-02-19 09:18:35.666075 | orchestrator | 2025-02-19 09:18:35 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:35.669916 | orchestrator | 2025-02-19 09:18:35 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:38.716762 | orchestrator | 2025-02-19 09:18:35 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:38.716864 | orchestrator | 2025-02-19 09:18:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:38.716891 | orchestrator | 2025-02-19 09:18:38 | INFO  | Task b20c3e7c-5b11-4baf-97fc-c0d6ccd412dc is in state SUCCESS 2025-02-19 09:18:38.719303 | orchestrator | 2025-02-19 09:18:38 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:38.722837 | orchestrator | 2025-02-19 09:18:38 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:38.726692 | orchestrator | 2025-02-19 09:18:38 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:41.775863 | orchestrator | 2025-02-19 09:18:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:41.776046 | orchestrator | 2025-02-19 09:18:41 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:41.780451 | orchestrator | 2025-02-19 09:18:41 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:41.780526 | orchestrator | 2025-02-19 09:18:41 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:18:41.781729 | orchestrator | 2025-02-19 09:18:41 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:44.827029 | orchestrator | 2025-02-19 09:18:41 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:44.827188 | orchestrator | 2025-02-19 09:18:44 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:44.827844 | orchestrator | 2025-02-19 09:18:44 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:44.827885 | orchestrator | 2025-02-19 09:18:44 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:18:44.828758 | orchestrator | 2025-02-19 09:18:44 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:47.868731 | orchestrator | 2025-02-19 09:18:44 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:47.868897 | orchestrator | 2025-02-19 09:18:47 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:47.869494 | orchestrator | 2025-02-19 09:18:47 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:47.869535 | orchestrator | 
2025-02-19 09:18:47 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:18:47.870220 | orchestrator | 2025-02-19 09:18:47 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:50.901396 | orchestrator | 2025-02-19 09:18:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:50.901493 | orchestrator | 2025-02-19 09:18:50 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:50.902229 | orchestrator | 2025-02-19 09:18:50 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:50.902650 | orchestrator | 2025-02-19 09:18:50 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:18:50.903704 | orchestrator | 2025-02-19 09:18:50 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:53.943829 | orchestrator | 2025-02-19 09:18:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:53.943991 | orchestrator | 2025-02-19 09:18:53 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:53.944791 | orchestrator | 2025-02-19 09:18:53 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:53.944885 | orchestrator | 2025-02-19 09:18:53 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:18:53.945550 | orchestrator | 2025-02-19 09:18:53 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:18:53.945714 | orchestrator | 2025-02-19 09:18:53 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:18:56.983179 | orchestrator | 2025-02-19 09:18:56 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:18:56.984000 | orchestrator | 2025-02-19 09:18:56 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:18:56.985468 | orchestrator | 2025-02-19 09:18:56 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:18:56.986993 | orchestrator | 2025-02-19 09:18:56 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:00.044901 | orchestrator | 2025-02-19 09:18:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:00.045056 | orchestrator | 2025-02-19 09:19:00 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:00.047889 | orchestrator | 2025-02-19 09:19:00 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:00.047953 | orchestrator | 2025-02-19 09:19:00 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:00.047982 | orchestrator | 2025-02-19 09:19:00 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:03.091908 | orchestrator | 2025-02-19 09:19:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:03.092010 | orchestrator | 2025-02-19 09:19:03 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:03.096862 | orchestrator | 2025-02-19 09:19:03 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:03.101185 | orchestrator | 2025-02-19 09:19:03 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:03.102711 | orchestrator | 2025-02-19 09:19:03 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:06.139308 | orchestrator | 
2025-02-19 09:19:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:06.139498 | orchestrator | 2025-02-19 09:19:06 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:06.140539 | orchestrator | 2025-02-19 09:19:06 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:06.142269 | orchestrator | 2025-02-19 09:19:06 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:06.143218 | orchestrator | 2025-02-19 09:19:06 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:09.194580 | orchestrator | 2025-02-19 09:19:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:09.194779 | orchestrator | 2025-02-19 09:19:09 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:09.195104 | orchestrator | 2025-02-19 09:19:09 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:09.195184 | orchestrator | 2025-02-19 09:19:09 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:09.195200 | orchestrator | 2025-02-19 09:19:09 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:12.229565 | orchestrator | 2025-02-19 09:19:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:12.229768 | orchestrator | 2025-02-19 09:19:12 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:12.230520 | orchestrator | 2025-02-19 09:19:12 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:12.230582 | orchestrator | 2025-02-19 09:19:12 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:12.231540 | orchestrator | 2025-02-19 09:19:12 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:15.283520 | orchestrator | 2025-02-19 09:19:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:15.283662 | orchestrator | 2025-02-19 09:19:15 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:15.284293 | orchestrator | 2025-02-19 09:19:15 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:15.285210 | orchestrator | 2025-02-19 09:19:15 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:15.286660 | orchestrator | 2025-02-19 09:19:15 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:18.316516 | orchestrator | 2025-02-19 09:19:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:18.316662 | orchestrator | 2025-02-19 09:19:18 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:18.317386 | orchestrator | 2025-02-19 09:19:18 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:18.319416 | orchestrator | 2025-02-19 09:19:18 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:18.321047 | orchestrator | 2025-02-19 09:19:18 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:21.349398 | orchestrator | 2025-02-19 09:19:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:21.349536 | orchestrator | 2025-02-19 09:19:21 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:21.350800 | orchestrator | 2025-02-19 09:19:21 | INFO 
 | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:21.352720 | orchestrator | 2025-02-19 09:19:21 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:21.354110 | orchestrator | 2025-02-19 09:19:21 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state STARTED 2025-02-19 09:19:24.396281 | orchestrator | 2025-02-19 09:19:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:24.396469 | orchestrator | 2025-02-19 09:19:24 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:24.397045 | orchestrator | 2025-02-19 09:19:24 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:24.397901 | orchestrator | 2025-02-19 09:19:24 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:24.399718 | orchestrator | 2025-02-19 09:19:24 | INFO  | Task 05bfe298-e580-40d5-bbcf-d6aece0949d0 is in state SUCCESS 2025-02-19 09:19:24.401427 | orchestrator | 2025-02-19 09:19:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:24.401483 | orchestrator | 2025-02-19 09:19:24.401814 | orchestrator | 2025-02-19 09:19:24.401844 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:19:24.401858 | orchestrator | 2025-02-19 09:19:24.401870 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:19:24.401883 | orchestrator | Wednesday 19 February 2025 09:18:35 +0000 (0:00:00.322) 0:00:00.322 **** 2025-02-19 09:19:24.401921 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:19:24.401936 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:19:24.401948 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:19:24.401960 | orchestrator | 2025-02-19 09:19:24.401973 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:19:24.401985 | orchestrator | Wednesday 19 February 2025 09:18:36 +0000 (0:00:00.473) 0:00:00.796 **** 2025-02-19 09:19:24.401997 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-02-19 09:19:24.402010 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-02-19 09:19:24.402072 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-02-19 09:19:24.402085 | orchestrator | 2025-02-19 09:19:24.402097 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-02-19 09:19:24.402116 | orchestrator | 2025-02-19 09:19:24.402137 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-02-19 09:19:24.402157 | orchestrator | Wednesday 19 February 2025 09:18:36 +0000 (0:00:00.644) 0:00:01.440 **** 2025-02-19 09:19:24.402178 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:19:24.402200 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:19:24.402221 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:19:24.402240 | orchestrator | 2025-02-19 09:19:24.402252 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:19:24.402266 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:19:24.402280 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:19:24.402295 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-02-19 09:19:24.402316 | orchestrator | 2025-02-19 09:19:24.402337 | orchestrator | 2025-02-19 09:19:24.402389 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:19:24.402433 | orchestrator | Wednesday 19 February 2025 09:18:37 +0000 (0:00:00.957) 0:00:02.397 **** 2025-02-19 09:19:24.402450 | orchestrator | =============================================================================== 2025-02-19 09:19:24.402464 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.96s 2025-02-19 09:19:24.402478 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2025-02-19 09:19:24.402492 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2025-02-19 09:19:24.402506 | orchestrator | 2025-02-19 09:19:24.402521 | orchestrator | 2025-02-19 09:19:24.402535 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:19:24.402549 | orchestrator | 2025-02-19 09:19:24.402562 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:19:24.402577 | orchestrator | Wednesday 19 February 2025 09:17:22 +0000 (0:00:00.382) 0:00:00.382 **** 2025-02-19 09:19:24.402591 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:19:24.402611 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:19:24.402625 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:19:24.402639 | orchestrator | 2025-02-19 09:19:24.402658 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:19:24.402672 | orchestrator | Wednesday 19 February 2025 09:17:23 +0000 (0:00:00.524) 0:00:00.907 **** 2025-02-19 09:19:24.402686 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-02-19 09:19:24.402700 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-02-19 09:19:24.402714 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-02-19 09:19:24.402728 | orchestrator | 2025-02-19 09:19:24.402742 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-02-19 09:19:24.402756 | orchestrator | 2025-02-19 09:19:24.402770 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-19 09:19:24.402784 | orchestrator | Wednesday 19 February 2025 09:17:23 +0000 (0:00:00.428) 0:00:01.335 **** 2025-02-19 09:19:24.402817 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:19:24.402830 | orchestrator | 2025-02-19 09:19:24.402843 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-02-19 09:19:24.402855 | orchestrator | Wednesday 19 February 2025 09:17:24 +0000 (0:00:00.921) 0:00:02.257 **** 2025-02-19 09:19:24.402868 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-02-19 09:19:24.402881 | orchestrator | 2025-02-19 09:19:24.402893 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-02-19 09:19:24.402905 | orchestrator | Wednesday 19 February 2025 09:17:28 +0000 (0:00:03.738) 0:00:05.995 **** 2025-02-19 09:19:24.402918 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-02-19 09:19:24.402930 
| orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-02-19 09:19:24.402943 | orchestrator | 2025-02-19 09:19:24.402955 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-02-19 09:19:24.402978 | orchestrator | Wednesday 19 February 2025 09:17:34 +0000 (0:00:06.580) 0:00:12.575 **** 2025-02-19 09:19:24.403000 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:19:24.403021 | orchestrator | 2025-02-19 09:19:24.403040 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-02-19 09:19:24.403061 | orchestrator | Wednesday 19 February 2025 09:17:39 +0000 (0:00:04.251) 0:00:16.827 **** 2025-02-19 09:19:24.403089 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:19:24.403102 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-02-19 09:19:24.403115 | orchestrator | 2025-02-19 09:19:24.403127 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-02-19 09:19:24.403139 | orchestrator | Wednesday 19 February 2025 09:17:45 +0000 (0:00:05.973) 0:00:22.801 **** 2025-02-19 09:19:24.403152 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:19:24.403165 | orchestrator | 2025-02-19 09:19:24.403177 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-02-19 09:19:24.403189 | orchestrator | Wednesday 19 February 2025 09:17:49 +0000 (0:00:04.310) 0:00:27.111 **** 2025-02-19 09:19:24.403201 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-02-19 09:19:24.403214 | orchestrator | 2025-02-19 09:19:24.403226 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-02-19 09:19:24.403240 | orchestrator | Wednesday 19 February 2025 09:17:53 +0000 (0:00:04.451) 0:00:31.562 **** 2025-02-19 09:19:24.403261 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:19:24.403282 | orchestrator | 2025-02-19 09:19:24.403300 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-02-19 09:19:24.403320 | orchestrator | Wednesday 19 February 2025 09:17:56 +0000 (0:00:02.872) 0:00:34.434 **** 2025-02-19 09:19:24.403340 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:19:24.403383 | orchestrator | 2025-02-19 09:19:24.403403 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-02-19 09:19:24.403423 | orchestrator | Wednesday 19 February 2025 09:18:00 +0000 (0:00:03.575) 0:00:38.010 **** 2025-02-19 09:19:24.403445 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:19:24.403465 | orchestrator | 2025-02-19 09:19:24.403484 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-02-19 09:19:24.403503 | orchestrator | Wednesday 19 February 2025 09:18:03 +0000 (0:00:03.453) 0:00:41.464 **** 2025-02-19 09:19:24.403537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.403615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.403644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.403683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.403707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.403728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.403760 | orchestrator | 2025-02-19 09:19:24.403781 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-02-19 09:19:24.403803 | orchestrator | Wednesday 19 February 2025 09:18:05 +0000 (0:00:01.832) 0:00:43.296 **** 2025-02-19 09:19:24.403822 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:19:24.403842 | orchestrator | 2025-02-19 09:19:24.403863 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-02-19 09:19:24.403883 | orchestrator | Wednesday 19 February 2025 09:18:05 +0000 (0:00:00.123) 0:00:43.420 **** 2025-02-19 09:19:24.403904 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:19:24.403924 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:19:24.403943 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:19:24.403965 | orchestrator | 2025-02-19 09:19:24.403986 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-02-19 09:19:24.404006 | orchestrator | Wednesday 19 February 2025 09:18:06 +0000 (0:00:00.494) 0:00:43.914 **** 2025-02-19 09:19:24.404026 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:19:24.404045 | orchestrator | 2025-02-19 09:19:24.404065 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-02-19 09:19:24.404085 | orchestrator | Wednesday 19 February 2025 09:18:07 +0000 (0:00:00.857) 0:00:44.771 **** 2025-02-19 09:19:24.404108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.404169 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.404196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.404231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.404253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.404275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.404295 | orchestrator | 2025-02-19 09:19:24.404315 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-02-19 09:19:24.404336 | orchestrator | Wednesday 19 February 2025 09:18:10 +0000 (0:00:03.590) 0:00:48.362 **** 2025-02-19 09:19:24.404390 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:19:24.404406 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:19:24.404419 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:19:24.404431 | orchestrator | 2025-02-19 09:19:24.404444 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-19 09:19:24.404465 | orchestrator | Wednesday 19 February 2025 09:18:11 +0000 (0:00:00.359) 0:00:48.722 **** 2025-02-19 09:19:24.404479 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:19:24.404491 | orchestrator | 2025-02-19 09:19:24.404504 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-02-19 09:19:24.404516 | orchestrator | Wednesday 19 February 2025 09:18:12 +0000 (0:00:01.141) 0:00:49.863 **** 2025-02-19 09:19:24.404547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.404572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.404585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.404599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.404632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.404653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.404674 | orchestrator | 2025-02-19 09:19:24.404693 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-02-19 09:19:24.404714 | orchestrator | Wednesday 19 February 2025 09:18:14 +0000 (0:00:02.519) 
0:00:52.383 **** 2025-02-19 09:19:24.404736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.404758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.404779 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:19:24.404801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.404849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.404883 | orchestrator | skipping: [testbed-node-1] 
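(Editorial aside: the healthcheck entries in the placement-api, magnum-api and magnum-conductor container definitions above — interval, retries, start_period and a CMD-SHELL test such as "healthcheck_curl http://192.168.16.10:9511" — describe a periodic probe of the service's bind address. Below is a minimal, illustrative Python sketch of that kind of probe. It assumes any reachable HTTP response counts as healthy; the actual healthcheck_curl helper shipped in the kolla images may apply stricter status-code checks, so treat this as an approximation, not the real script.)

#!/usr/bin/env python3
"""Illustrative stand-in for the container healthchecks shown in this log.

Assumption: a service is "healthy" once an HTTP request to its bind
address is answered at all. The real healthcheck_curl helper in the
kolla images may be stricter.
"""
import time
import urllib.error
import urllib.request


def probe(url: str, retries: int = 3, interval: float = 30.0, timeout: float = 30.0) -> bool:
    """Return True once the endpoint answers, False after all retries fail."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except urllib.error.HTTPError:
            # The server answered (e.g. 401 from an API expecting a token),
            # which still proves the process is up and listening.
            return True
        except (urllib.error.URLError, OSError):
            if attempt < retries:
                time.sleep(interval)
    return False


if __name__ == "__main__":
    # Values mirror the magnum-api healthcheck from the log:
    # 30 s interval, 3 retries, 30 s timeout, probing the internal bind address.
    print(probe("http://192.168.16.10:9511", retries=3, interval=30.0, timeout=30.0))

(With the magnum-api values from the log, this would poll http://192.168.16.10:9511 up to three times, 30 seconds apart, before reporting the container as unhealthy.)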
2025-02-19 09:19:24.404904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.404926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.404948 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:19:24.404970 | orchestrator | 2025-02-19 09:19:24.404990 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-02-19 09:19:24.405012 | orchestrator | Wednesday 19 February 2025 09:18:15 +0000 (0:00:00.843) 0:00:53.226 **** 2025-02-19 09:19:24.405034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.405056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.405086 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:19:24.405140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.405165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.405187 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:19:24.405209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.405230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.405252 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:19:24.405272 | orchestrator | 2025-02-19 09:19:24.405294 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-02-19 09:19:24.405314 | orchestrator | Wednesday 19 February 2025 09:18:16 +0000 (0:00:00.974) 0:00:54.201 **** 2025-02-19 09:19:24.405392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.405433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.405456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.405479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.405500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.405578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.405615 | orchestrator | 2025-02-19 09:19:24.405637 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-02-19 09:19:24.405657 | orchestrator | Wednesday 19 February 2025 09:18:19 +0000 (0:00:02.489) 0:00:56.690 **** 2025-02-19 09:19:24.405677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.405698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.405718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.405755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.405801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.405825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.405847 | orchestrator | 2025-02-19 09:19:24.405867 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-02-19 09:19:24.405887 | orchestrator | Wednesday 19 February 2025 09:18:26 +0000 (0:00:07.723) 0:01:04.414 **** 2025-02-19 09:19:24.405908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.405930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.405951 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:19:24.405987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.406098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.406128 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:19:24.406150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-19 09:19:24.406170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:19:24.406190 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:19:24.406211 | orchestrator | 2025-02-19 09:19:24.406233 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-02-19 09:19:24.406255 | orchestrator | Wednesday 19 February 2025 09:18:27 +0000 (0:00:01.167) 0:01:05.581 **** 2025-02-19 09:19:24.406277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.406332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.406411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-02-19 09:19:24.406435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.406481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.406521 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:19:24.406554 | orchestrator | 2025-02-19 09:19:24.406575 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-02-19 09:19:24.406597 | orchestrator | Wednesday 19 February 2025 09:18:31 +0000 (0:00:03.238) 0:01:08.820 **** 2025-02-19 09:19:24.406617 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:19:24.406638 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:19:24.406658 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:19:24.406677 | orchestrator | 2025-02-19 09:19:24.406698 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-02-19 09:19:24.406717 | orchestrator | Wednesday 19 February 2025 09:18:31 +0000 (0:00:00.388) 0:01:09.208 **** 2025-02-19 09:19:24.406737 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:19:24.406756 | orchestrator | 2025-02-19 09:19:24.406784 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-02-19 09:19:24.406805 | orchestrator | Wednesday 19 February 2025 09:18:34 +0000 (0:00:02.459) 0:01:11.668 **** 2025-02-19 09:19:24.406824 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:19:24.406842 | orchestrator | 2025-02-19 09:19:24.406861 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-02-19 09:19:24.406882 | orchestrator | Wednesday 19 February 2025 09:18:37 +0000 (0:00:03.011) 0:01:14.679 **** 2025-02-19 09:19:24.406902 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:19:24.406924 | orchestrator | 2025-02-19 09:19:24.406944 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-19 09:19:24.406966 | orchestrator | Wednesday 19 February 2025 09:18:56 +0000 (0:00:18.925) 0:01:33.605 **** 2025-02-19 09:19:24.406984 | orchestrator | 2025-02-19 09:19:24.407018 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-19 09:19:27.440621 | orchestrator | Wednesday 19 February 2025 09:18:56 +0000 (0:00:00.085) 0:01:33.690 **** 2025-02-19 09:19:27.441598 | orchestrator | 2025-02-19 09:19:27.441639 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-02-19 09:19:27.441656 | orchestrator | Wednesday 19 February 2025 09:18:56 +0000 (0:00:00.088) 0:01:33.779 **** 2025-02-19 09:19:27.441670 | orchestrator | 2025-02-19 09:19:27.441685 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-02-19 09:19:27.441700 | orchestrator | Wednesday 19 February 2025 09:18:56 +0000 (0:00:00.262) 0:01:34.041 **** 2025-02-19 09:19:27.441716 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:19:27.441733 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:19:27.441750 | 
orchestrator | changed: [testbed-node-1] 2025-02-19 09:19:27.441765 | orchestrator | 2025-02-19 09:19:27.441781 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-02-19 09:19:27.441983 | orchestrator | Wednesday 19 February 2025 09:19:10 +0000 (0:00:14.406) 0:01:48.447 **** 2025-02-19 09:19:27.441998 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:19:27.442012 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:19:27.442079 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:19:27.442093 | orchestrator | 2025-02-19 09:19:27.442107 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:19:27.442123 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-19 09:19:27.442139 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 09:19:27.442153 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 09:19:27.442204 | orchestrator | 2025-02-19 09:19:27.442248 | orchestrator | 2025-02-19 09:19:27.442262 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:19:27.442277 | orchestrator | Wednesday 19 February 2025 09:19:22 +0000 (0:00:11.584) 0:02:00.032 **** 2025-02-19 09:19:27.442291 | orchestrator | =============================================================================== 2025-02-19 09:19:27.442305 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.93s 2025-02-19 09:19:27.442319 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.41s 2025-02-19 09:19:27.442333 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.58s 2025-02-19 09:19:27.442347 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.72s 2025-02-19 09:19:27.442391 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.58s 2025-02-19 09:19:27.442406 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 5.97s 2025-02-19 09:19:27.442420 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.45s 2025-02-19 09:19:27.442434 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.31s 2025-02-19 09:19:27.442448 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 4.25s 2025-02-19 09:19:27.442462 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.74s 2025-02-19 09:19:27.442476 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.59s 2025-02-19 09:19:27.442490 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.58s 2025-02-19 09:19:27.442504 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.45s 2025-02-19 09:19:27.442517 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.24s 2025-02-19 09:19:27.442545 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 3.01s 2025-02-19 09:19:27.442560 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.87s 2025-02-19 
09:19:27.442574 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.52s 2025-02-19 09:19:27.442587 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.49s 2025-02-19 09:19:27.442601 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.46s 2025-02-19 09:19:27.442616 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.83s 2025-02-19 09:19:27.442648 | orchestrator | 2025-02-19 09:19:27 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:30.468967 | orchestrator | 2025-02-19 09:19:27 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:30.469206 | orchestrator | 2025-02-19 09:19:27 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:30.469231 | orchestrator | 2025-02-19 09:19:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:30.469265 | orchestrator | 2025-02-19 09:19:30 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:30.470504 | orchestrator | 2025-02-19 09:19:30 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:30.470567 | orchestrator | 2025-02-19 09:19:30 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:33.500674 | orchestrator | 2025-02-19 09:19:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:33.500817 | orchestrator | 2025-02-19 09:19:33 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:33.501351 | orchestrator | 2025-02-19 09:19:33 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:33.502191 | orchestrator | 2025-02-19 09:19:33 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:36.541235 | orchestrator | 2025-02-19 09:19:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:36.541530 | orchestrator | 2025-02-19 09:19:36 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:36.541913 | orchestrator | 2025-02-19 09:19:36 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:36.542696 | orchestrator | 2025-02-19 09:19:36 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:39.582141 | orchestrator | 2025-02-19 09:19:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:39.582341 | orchestrator | 2025-02-19 09:19:39 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:39.583961 | orchestrator | 2025-02-19 09:19:39 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:39.585380 | orchestrator | 2025-02-19 09:19:39 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:42.618586 | orchestrator | 2025-02-19 09:19:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:42.618730 | orchestrator | 2025-02-19 09:19:42 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:42.619250 | orchestrator | 2025-02-19 09:19:42 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:42.619286 | orchestrator | 2025-02-19 09:19:42 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:45.660280 | orchestrator | 2025-02-19 
09:19:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:45.660628 | orchestrator | 2025-02-19 09:19:45 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:45.662080 | orchestrator | 2025-02-19 09:19:45 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:45.662151 | orchestrator | 2025-02-19 09:19:45 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:48.735705 | orchestrator | 2025-02-19 09:19:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:48.735803 | orchestrator | 2025-02-19 09:19:48 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:48.736481 | orchestrator | 2025-02-19 09:19:48 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:48.738749 | orchestrator | 2025-02-19 09:19:48 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:51.779250 | orchestrator | 2025-02-19 09:19:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:51.779428 | orchestrator | 2025-02-19 09:19:51 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:51.780273 | orchestrator | 2025-02-19 09:19:51 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:51.782604 | orchestrator | 2025-02-19 09:19:51 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:54.833425 | orchestrator | 2025-02-19 09:19:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:54.833723 | orchestrator | 2025-02-19 09:19:54 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:54.834249 | orchestrator | 2025-02-19 09:19:54 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:54.834304 | orchestrator | 2025-02-19 09:19:54 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:19:57.877290 | orchestrator | 2025-02-19 09:19:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:19:57.877469 | orchestrator | 2025-02-19 09:19:57 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:19:57.878231 | orchestrator | 2025-02-19 09:19:57 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:19:57.879270 | orchestrator | 2025-02-19 09:19:57 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:00.918319 | orchestrator | 2025-02-19 09:19:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:00.918509 | orchestrator | 2025-02-19 09:20:00 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:00.919208 | orchestrator | 2025-02-19 09:20:00 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:20:00.920062 | orchestrator | 2025-02-19 09:20:00 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:03.951763 | orchestrator | 2025-02-19 09:20:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:03.951911 | orchestrator | 2025-02-19 09:20:03 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:06.982714 | orchestrator | 2025-02-19 09:20:03 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:20:06.982911 | orchestrator | 2025-02-19 09:20:03 | INFO  | Task 
4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:06.982949 | orchestrator | 2025-02-19 09:20:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:06.982988 | orchestrator | 2025-02-19 09:20:06 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:06.983291 | orchestrator | 2025-02-19 09:20:06 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:20:06.984107 | orchestrator | 2025-02-19 09:20:06 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:10.019556 | orchestrator | 2025-02-19 09:20:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:10.019716 | orchestrator | 2025-02-19 09:20:10 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:10.020212 | orchestrator | 2025-02-19 09:20:10 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:20:10.021107 | orchestrator | 2025-02-19 09:20:10 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:13.069573 | orchestrator | 2025-02-19 09:20:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:13.069714 | orchestrator | 2025-02-19 09:20:13 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:13.072051 | orchestrator | 2025-02-19 09:20:13 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:20:13.073139 | orchestrator | 2025-02-19 09:20:13 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:16.104725 | orchestrator | 2025-02-19 09:20:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:16.104869 | orchestrator | 2025-02-19 09:20:16 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:16.105467 | orchestrator | 2025-02-19 09:20:16 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:20:16.106450 | orchestrator | 2025-02-19 09:20:16 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:19.146658 | orchestrator | 2025-02-19 09:20:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:19.146800 | orchestrator | 2025-02-19 09:20:19 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:19.148177 | orchestrator | 2025-02-19 09:20:19 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state STARTED 2025-02-19 09:20:19.148228 | orchestrator | 2025-02-19 09:20:19 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:22.186743 | orchestrator | 2025-02-19 09:20:19 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:22.186872 | orchestrator | 2025-02-19 09:20:22 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:22.190969 | orchestrator | 2025-02-19 09:20:22.191685 | orchestrator | 2025-02-19 09:20:22.191731 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:20:22.191759 | orchestrator | 2025-02-19 09:20:22.191786 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:20:22.191812 | orchestrator | Wednesday 19 February 2025 09:18:02 +0000 (0:00:00.388) 0:00:00.388 **** 2025-02-19 09:20:22.191838 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:20:22.191865 | orchestrator | ok: [testbed-node-1] 2025-02-19 
09:20:22.191891 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:20:22.191916 | orchestrator | 2025-02-19 09:20:22.191941 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:20:22.191967 | orchestrator | Wednesday 19 February 2025 09:18:02 +0000 (0:00:00.319) 0:00:00.707 **** 2025-02-19 09:20:22.191992 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-02-19 09:20:22.192018 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-02-19 09:20:22.192043 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-02-19 09:20:22.192068 | orchestrator | 2025-02-19 09:20:22.192093 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-02-19 09:20:22.192119 | orchestrator | 2025-02-19 09:20:22.192144 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-02-19 09:20:22.192169 | orchestrator | Wednesday 19 February 2025 09:18:03 +0000 (0:00:00.397) 0:00:01.105 **** 2025-02-19 09:20:22.192195 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:20:22.192222 | orchestrator | 2025-02-19 09:20:22.192247 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-02-19 09:20:22.192272 | orchestrator | Wednesday 19 February 2025 09:18:03 +0000 (0:00:00.624) 0:00:01.729 **** 2025-02-19 09:20:22.192298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.192330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.192426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.192453 | orchestrator | 2025-02-19 09:20:22.192477 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-02-19 09:20:22.192501 | orchestrator | Wednesday 19 February 2025 09:18:04 +0000 (0:00:01.070) 0:00:02.800 **** 2025-02-19 09:20:22.192525 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-02-19 09:20:22.192551 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-02-19 09:20:22.192575 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:20:22.192602 | orchestrator | 2025-02-19 09:20:22.192627 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-02-19 09:20:22.192652 | orchestrator | Wednesday 19 February 2025 09:18:05 +0000 (0:00:00.622) 0:00:03.422 **** 2025-02-19 09:20:22.192677 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:20:22.192701 | orchestrator | 2025-02-19 09:20:22.192725 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-02-19 09:20:22.192749 | orchestrator | Wednesday 19 February 2025 09:18:06 +0000 (0:00:00.804) 0:00:04.227 **** 2025-02-19 09:20:22.192836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.192863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.192886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.192910 | orchestrator | 2025-02-19 09:20:22.192932 | 
orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-02-19 09:20:22.192967 | orchestrator | Wednesday 19 February 2025 09:18:07 +0000 (0:00:01.804) 0:00:06.031 **** 2025-02-19 09:20:22.193042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 09:20:22.193068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 09:20:22.193091 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:20:22.193114 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:20:22.193184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 09:20:22.193208 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:20:22.193231 | orchestrator | 2025-02-19 09:20:22.193254 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-02-19 09:20:22.193277 | orchestrator | Wednesday 19 February 2025 09:18:09 +0000 (0:00:01.078) 0:00:07.110 **** 2025-02-19 09:20:22.193299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 
09:20:22.193323 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:20:22.193345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 09:20:22.193418 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:20:22.193463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-19 09:20:22.193489 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:20:22.193514 | orchestrator | 2025-02-19 09:20:22.193538 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-02-19 09:20:22.193561 | orchestrator | Wednesday 19 February 2025 09:18:10 +0000 (0:00:01.092) 0:00:08.202 **** 2025-02-19 09:20:22.193586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.193611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.193684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.193709 | orchestrator | 2025-02-19 09:20:22.193732 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-02-19 09:20:22.193755 | orchestrator | Wednesday 19 February 2025 09:18:11 +0000 (0:00:01.719) 0:00:09.921 **** 2025-02-19 09:20:22.193778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.193812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.193850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.193874 | orchestrator | 2025-02-19 09:20:22.193898 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-02-19 09:20:22.193926 | orchestrator | Wednesday 19 February 2025 09:18:13 +0000 (0:00:01.892) 0:00:11.814 **** 2025-02-19 09:20:22.193949 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:20:22.193972 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:20:22.193994 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:20:22.194114 | orchestrator | 2025-02-19 09:20:22.194141 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-02-19 09:20:22.194163 | 
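The prometheus.yaml.j2 template rendered by this task follows Grafana's standard datasource provisioning format. A minimal sketch of the rendered file is shown below for orientation; the datasource name, URL and port are assumptions, not values taken from the log:

    # Illustrative sketch of a rendered Grafana datasource provisioning file
    apiVersion: 1
    datasources:
      - name: Prometheus                      # assumed display name
        type: prometheus
        access: proxy
        url: http://prometheus.internal:9091  # assumed internal Prometheus endpoint
        isDefault: true
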
orchestrator | Wednesday 19 February 2025 09:18:14 +0000 (0:00:00.266) 0:00:12.081 **** 2025-02-19 09:20:22.194186 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-19 09:20:22.194209 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-19 09:20:22.194231 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-19 09:20:22.194253 | orchestrator | 2025-02-19 09:20:22.194275 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-02-19 09:20:22.194297 | orchestrator | Wednesday 19 February 2025 09:18:15 +0000 (0:00:01.294) 0:00:13.376 **** 2025-02-19 09:20:22.194320 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-19 09:20:22.194343 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-19 09:20:22.194387 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-19 09:20:22.194410 | orchestrator | 2025-02-19 09:20:22.194434 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-02-19 09:20:22.194457 | orchestrator | Wednesday 19 February 2025 09:18:16 +0000 (0:00:01.245) 0:00:14.621 **** 2025-02-19 09:20:22.194529 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:20:22.194553 | orchestrator | 2025-02-19 09:20:22.194576 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-02-19 09:20:22.194598 | orchestrator | Wednesday 19 February 2025 09:18:17 +0000 (0:00:00.453) 0:00:15.075 **** 2025-02-19 09:20:22.194621 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-02-19 09:20:22.194644 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-02-19 09:20:22.194678 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:20:22.194701 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:20:22.194723 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:20:22.194752 | orchestrator | 2025-02-19 09:20:22.194773 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-02-19 09:20:22.194795 | orchestrator | Wednesday 19 February 2025 09:18:17 +0000 (0:00:00.854) 0:00:15.930 **** 2025-02-19 09:20:22.194818 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:20:22.194841 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:20:22.194863 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:20:22.194884 | orchestrator | 2025-02-19 09:20:22.194905 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-02-19 09:20:22.194929 | orchestrator | Wednesday 19 February 2025 09:18:18 +0000 (0:00:00.532) 0:00:16.462 **** 2025-02-19 09:20:22.194953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1339366, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 
'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.194975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1339366, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.194996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1339366, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1339348, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1339348, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1339348, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1339333, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1339333, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1339333, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1339359, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1339359, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1339359, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1345906, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1345906, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1345906, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1339335, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1339335, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1339335, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1339356, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1339356, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1339356, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1345904, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 
1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1345904, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1345904, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1345806, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5786262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1345806, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5786262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1345806, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5786262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 
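The custom dashboards copied in this task are grouped per subfolder (ceph/, infrastructure/, ...) and are picked up by the dashboard provider configured a few steps earlier via provisioning.yaml. Such a provider file typically looks roughly like the sketch below; the provider name and in-container path are assumptions based on the logged file locations, not the actual overlay content:

    # Illustrative sketch of a Grafana dashboard provider definition
    apiVersion: 1
    providers:
      - name: default                          # assumed provider name
        orgId: 1
        type: file
        disableDeletion: false
        options:
          path: /etc/grafana/dashboards        # assumed in-container path for the copied JSON files
          foldersFromFilesStructure: true      # keep the ceph/ and infrastructure/ grouping as Grafana folders
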
09:20:22.195794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1345909, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5946264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1345909, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5946264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1345909, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5946264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1345892, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5906265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1345892, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5906265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1345892, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5906265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1339351, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.195991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1339351, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1339351, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1345910, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6116269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1345910, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6116269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1345910, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6116269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1339363, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1339363, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1339363, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.613627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1345903, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1345903, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1345903, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1339340, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1339340, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1339340, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6126268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 
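Once the grafana container is (re)started later in the play, the external frontend defined in the haproxy section of the service definition above can be spot-checked. A minimal Ansible task for such a check could look like the following sketch; the URL is assembled from the external_fqdn and listen_port seen in the log, while the scheme and TLS validation settings are assumptions about the testbed:

    # Sketch: verify Grafana answers behind the external HAProxy frontend
    - name: Check Grafana health endpoint
      ansible.builtin.uri:
        url: "https://api.testbed.osism.xyz:3000/api/health"  # FQDN and port from the logged haproxy config
        validate_certs: false                                 # assumption: testbed certificates may be self-signed
        status_code: 200
      delegate_to: localhost
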
09:20:22.196350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1345807, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5906265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1345807, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5906265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1345807, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5906265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1345893, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1345893, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1345893, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.5936265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1339328, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6116269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1339328, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6116269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1339328, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6116269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1339492, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.621627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1339492, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.621627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1339492, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.621627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1339470, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1339470, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1339470, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1339382, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1339382, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1339382, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1339625, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.625627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1339625, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.625627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1339625, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.625627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1339386, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.196975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1339386, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1339386, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.614627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1339618, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1339618, 'dev': 
163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1339618, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1339635, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.625627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1339635, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.625627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1339635, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.625627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1339515, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 
'ctime': 1739953115.623627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1339515, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.623627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1339515, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.623627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1339612, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1339612, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1339612, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1339390, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6156268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1339390, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6156268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1339390, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6156268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1339476, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1339476, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1339476, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1339650, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.626627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1339650, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.626627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1339650, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.626627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1339622, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197674 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1339622, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1339622, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.624627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1339408, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.616627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1339408, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.616627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1339408, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.616627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1339400, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6156268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1339400, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6156268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1339400, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6156268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1339424, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.616627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1339424, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.616627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1339437, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1339424, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.616627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1339437, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.197987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1339483, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1339437, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 
'inode': 1339483, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1339483, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1339551, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.623627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1339551, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.623627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1339551, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.623627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1339489, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1339489, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1339489, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6186268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1339658, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1339658, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1339658, 'dev': 163, 'nlink': 1, 'atime': 1739923377.0, 'mtime': 1739923377.0, 'ctime': 1739953115.6276271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-19 09:20:22.198323 | orchestrator | 2025-02-19 09:20:22.198340 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-02-19 09:20:22.198357 | orchestrator | Wednesday 19 February 2025 09:19:00 +0000 (0:00:41.878) 0:00:58.341 **** 2025-02-19 09:20:22.198397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.198415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.198431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-19 09:20:22.198447 | orchestrator | 2025-02-19 09:20:22.198464 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-02-19 09:20:22.198480 | orchestrator | Wednesday 19 February 2025 09:19:02 +0000 (0:00:02.332) 0:01:00.674 **** 2025-02-19 09:20:22.198496 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:20:22.198512 | orchestrator | 2025-02-19 09:20:22.198528 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-02-19 09:20:22.198543 | orchestrator | Wednesday 19 February 2025 09:19:05 +0000 (0:00:03.283) 0:01:03.957 **** 2025-02-19 09:20:22.198559 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:20:22.198575 | orchestrator | 2025-02-19 09:20:22.198590 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-02-19 09:20:22.198606 | orchestrator | Wednesday 19 February 2025 09:19:08 +0000 (0:00:02.699) 0:01:06.656 **** 2025-02-19 09:20:22.198622 | orchestrator | 2025-02-19 09:20:22.198638 | orchestrator | TASK 
[grafana : Flush handlers] ************************************************ 2025-02-19 09:20:22.198660 | orchestrator | Wednesday 19 February 2025 09:19:08 +0000 (0:00:00.071) 0:01:06.728 **** 2025-02-19 09:20:22.198684 | orchestrator | 2025-02-19 09:20:22.198700 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-02-19 09:20:22.198716 | orchestrator | Wednesday 19 February 2025 09:19:08 +0000 (0:00:00.064) 0:01:06.793 **** 2025-02-19 09:20:22.198732 | orchestrator | 2025-02-19 09:20:22.198748 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-02-19 09:20:22.198764 | orchestrator | Wednesday 19 February 2025 09:19:08 +0000 (0:00:00.077) 0:01:06.870 **** 2025-02-19 09:20:22.198779 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:20:22.198794 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:20:22.198809 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:20:22.198826 | orchestrator | 2025-02-19 09:20:22.198842 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-02-19 09:20:22.198858 | orchestrator | Wednesday 19 February 2025 09:19:12 +0000 (0:00:03.971) 0:01:10.842 **** 2025-02-19 09:20:22.198873 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:20:22.198888 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:20:22.198904 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-02-19 09:20:22.198921 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-02-19 09:20:22.198937 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-02-19 09:20:22.198952 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:20:22.198968 | orchestrator | 2025-02-19 09:20:22.198984 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-02-19 09:20:22.199001 | orchestrator | Wednesday 19 February 2025 09:19:52 +0000 (0:00:40.043) 0:01:50.885 **** 2025-02-19 09:20:22.199017 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:20:22.199033 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:20:22.199050 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:20:22.199066 | orchestrator | 2025-02-19 09:20:22.199082 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-02-19 09:20:22.199099 | orchestrator | Wednesday 19 February 2025 09:20:13 +0000 (0:00:20.242) 0:02:11.128 **** 2025-02-19 09:20:22.199115 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:20:22.199132 | orchestrator | 2025-02-19 09:20:22.199149 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-02-19 09:20:22.199166 | orchestrator | Wednesday 19 February 2025 09:20:15 +0000 (0:00:02.753) 0:02:13.881 **** 2025-02-19 09:20:22.199182 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:20:22.199199 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:20:22.199215 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:20:22.199232 | orchestrator | 2025-02-19 09:20:22.199249 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-02-19 09:20:22.199265 | orchestrator | Wednesday 19 February 2025 09:20:16 +0000 (0:00:00.550) 0:02:14.432 **** 2025-02-19 09:20:22.199282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-02-19 09:20:22.199301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-02-19 09:20:22.199318 | orchestrator | 2025-02-19 09:20:22.199335 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-02-19 09:20:22.199351 | orchestrator | Wednesday 19 February 2025 09:20:19 +0000 (0:00:02.877) 0:02:17.309 **** 2025-02-19 09:20:22.199387 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:20:22.199405 | orchestrator | 2025-02-19 09:20:22.199422 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:20:22.199447 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-19 09:20:22.199464 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-19 09:20:22.199480 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-19 09:20:22.199497 | orchestrator | 2025-02-19 09:20:22.199513 | orchestrator | 2025-02-19 09:20:22.199530 | orchestrator | TASKS RECAP 
******************************************************************** 2025-02-19 09:20:22.199546 | orchestrator | Wednesday 19 February 2025 09:20:19 +0000 (0:00:00.411) 0:02:17.721 **** 2025-02-19 09:20:22.199563 | orchestrator | =============================================================================== 2025-02-19 09:20:22.199579 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.88s 2025-02-19 09:20:22.199595 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 40.04s 2025-02-19 09:20:22.199611 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 20.24s 2025-02-19 09:20:22.199627 | orchestrator | grafana : Restart first grafana container ------------------------------- 3.97s 2025-02-19 09:20:22.199642 | orchestrator | grafana : Creating grafana database ------------------------------------- 3.28s 2025-02-19 09:20:22.199664 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.88s 2025-02-19 09:20:25.232021 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.75s 2025-02-19 09:20:25.233061 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.70s 2025-02-19 09:20:25.233110 | orchestrator | grafana : Check grafana containers -------------------------------------- 2.33s 2025-02-19 09:20:25.233125 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.89s 2025-02-19 09:20:25.233139 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.80s 2025-02-19 09:20:25.233153 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.72s 2025-02-19 09:20:25.233167 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.29s 2025-02-19 09:20:25.233181 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.25s 2025-02-19 09:20:25.233194 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.09s 2025-02-19 09:20:25.233208 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 1.08s 2025-02-19 09:20:25.233223 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.07s 2025-02-19 09:20:25.233237 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.85s 2025-02-19 09:20:25.233250 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.80s 2025-02-19 09:20:25.233283 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.62s 2025-02-19 09:20:25.233298 | orchestrator | 2025-02-19 09:20:22 | INFO  | Task 5f08661a-e0c3-4d1d-80ea-e5b7b47f3659 is in state SUCCESS 2025-02-19 09:20:25.233312 | orchestrator | 2025-02-19 09:20:22 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:25.233327 | orchestrator | 2025-02-19 09:20:22 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:25.233396 | orchestrator | 2025-02-19 09:20:25 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:28.274703 | orchestrator | 2025-02-19 09:20:25 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:28.274836 | orchestrator | 2025-02-19 09:20:25 | INFO  | Wait 1 second(s) until the 
next check 2025-02-19 09:20:28.274922 | orchestrator | 2025-02-19 09:20:28 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:28.276230 | orchestrator | 2025-02-19 09:20:28 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:31.324511 | orchestrator | 2025-02-19 09:20:28 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:31.324670 | orchestrator | 2025-02-19 09:20:31 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:31.325498 | orchestrator | 2025-02-19 09:20:31 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:34.374827 | orchestrator | 2025-02-19 09:20:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:34.375008 | orchestrator | 2025-02-19 09:20:34 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:34.375539 | orchestrator | 2025-02-19 09:20:34 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:37.421601 | orchestrator | 2025-02-19 09:20:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:37.421777 | orchestrator | 2025-02-19 09:20:37 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:37.422532 | orchestrator | 2025-02-19 09:20:37 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:40.463233 | orchestrator | 2025-02-19 09:20:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:40.463361 | orchestrator | 2025-02-19 09:20:40 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:43.491821 | orchestrator | 2025-02-19 09:20:40 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:43.491944 | orchestrator | 2025-02-19 09:20:40 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:43.491981 | orchestrator | 2025-02-19 09:20:43 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:43.492715 | orchestrator | 2025-02-19 09:20:43 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:46.528779 | orchestrator | 2025-02-19 09:20:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:46.528922 | orchestrator | 2025-02-19 09:20:46 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:46.529783 | orchestrator | 2025-02-19 09:20:46 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:49.576140 | orchestrator | 2025-02-19 09:20:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:49.576272 | orchestrator | 2025-02-19 09:20:49 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:49.577250 | orchestrator | 2025-02-19 09:20:49 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:52.613111 | orchestrator | 2025-02-19 09:20:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:52.613247 | orchestrator | 2025-02-19 09:20:52 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:52.614438 | orchestrator | 2025-02-19 09:20:52 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:55.650568 | orchestrator | 2025-02-19 09:20:52 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:55.650756 | orchestrator | 2025-02-19 09:20:55 | INFO  | Task 
9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:20:55.656120 | orchestrator | 2025-02-19 09:20:55 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:20:58.689670 | orchestrator | 2025-02-19 09:20:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:20:58.689826 | orchestrator | 2025-02-19 09:20:58 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:01.732835 | orchestrator | 2025-02-19 09:20:58 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:01.732919 | orchestrator | 2025-02-19 09:20:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:01.732940 | orchestrator | 2025-02-19 09:21:01 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:01.734674 | orchestrator | 2025-02-19 09:21:01 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:04.807294 | orchestrator | 2025-02-19 09:21:01 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:04.807521 | orchestrator | 2025-02-19 09:21:04 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:04.807983 | orchestrator | 2025-02-19 09:21:04 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:07.852960 | orchestrator | 2025-02-19 09:21:04 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:07.853125 | orchestrator | 2025-02-19 09:21:07 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:07.854120 | orchestrator | 2025-02-19 09:21:07 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:10.901175 | orchestrator | 2025-02-19 09:21:07 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:10.901313 | orchestrator | 2025-02-19 09:21:10 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:13.935673 | orchestrator | 2025-02-19 09:21:10 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:13.935770 | orchestrator | 2025-02-19 09:21:10 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:13.935793 | orchestrator | 2025-02-19 09:21:13 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:13.936410 | orchestrator | 2025-02-19 09:21:13 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:16.971888 | orchestrator | 2025-02-19 09:21:13 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:16.972025 | orchestrator | 2025-02-19 09:21:16 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:16.972444 | orchestrator | 2025-02-19 09:21:16 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:20.009969 | orchestrator | 2025-02-19 09:21:16 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:20.010173 | orchestrator | 2025-02-19 09:21:20 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:23.053673 | orchestrator | 2025-02-19 09:21:20 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:23.053779 | orchestrator | 2025-02-19 09:21:20 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:23.053808 | orchestrator | 2025-02-19 09:21:23 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:23.056863 | orchestrator 
| 2025-02-19 09:21:23 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:26.095808 | orchestrator | 2025-02-19 09:21:23 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:26.095961 | orchestrator | 2025-02-19 09:21:26 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:26.096768 | orchestrator | 2025-02-19 09:21:26 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:29.154718 | orchestrator | 2025-02-19 09:21:26 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:29.154857 | orchestrator | 2025-02-19 09:21:29 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:32.199370 | orchestrator | 2025-02-19 09:21:29 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:32.199581 | orchestrator | 2025-02-19 09:21:29 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:32.199631 | orchestrator | 2025-02-19 09:21:32 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:32.201944 | orchestrator | 2025-02-19 09:21:32 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:35.242667 | orchestrator | 2025-02-19 09:21:32 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:35.242800 | orchestrator | 2025-02-19 09:21:35 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:35.243120 | orchestrator | 2025-02-19 09:21:35 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:38.280696 | orchestrator | 2025-02-19 09:21:35 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:38.280798 | orchestrator | 2025-02-19 09:21:38 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:38.280980 | orchestrator | 2025-02-19 09:21:38 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:41.335905 | orchestrator | 2025-02-19 09:21:38 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:41.336049 | orchestrator | 2025-02-19 09:21:41 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:41.337483 | orchestrator | 2025-02-19 09:21:41 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:44.385598 | orchestrator | 2025-02-19 09:21:41 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:44.385717 | orchestrator | 2025-02-19 09:21:44 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:44.386721 | orchestrator | 2025-02-19 09:21:44 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:44.386955 | orchestrator | 2025-02-19 09:21:44 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:47.423277 | orchestrator | 2025-02-19 09:21:47 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:47.423616 | orchestrator | 2025-02-19 09:21:47 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:50.468895 | orchestrator | 2025-02-19 09:21:47 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:50.469054 | orchestrator | 2025-02-19 09:21:50 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:53.533772 | orchestrator | 2025-02-19 09:21:50 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 
2025-02-19 09:21:53.533919 | orchestrator | 2025-02-19 09:21:50 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:53.533959 | orchestrator | 2025-02-19 09:21:53 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:53.536727 | orchestrator | 2025-02-19 09:21:53 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:56.580403 | orchestrator | 2025-02-19 09:21:53 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:56.580547 | orchestrator | 2025-02-19 09:21:56 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:56.581076 | orchestrator | 2025-02-19 09:21:56 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:21:59.651661 | orchestrator | 2025-02-19 09:21:56 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:21:59.651834 | orchestrator | 2025-02-19 09:21:59 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:21:59.653636 | orchestrator | 2025-02-19 09:21:59 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:02.713681 | orchestrator | 2025-02-19 09:21:59 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:02.713831 | orchestrator | 2025-02-19 09:22:02 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:05.758755 | orchestrator | 2025-02-19 09:22:02 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:05.758874 | orchestrator | 2025-02-19 09:22:02 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:05.758914 | orchestrator | 2025-02-19 09:22:05 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:08.810775 | orchestrator | 2025-02-19 09:22:05 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:08.810875 | orchestrator | 2025-02-19 09:22:05 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:08.810902 | orchestrator | 2025-02-19 09:22:08 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:08.811856 | orchestrator | 2025-02-19 09:22:08 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:11.858517 | orchestrator | 2025-02-19 09:22:08 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:11.858656 | orchestrator | 2025-02-19 09:22:11 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:14.910747 | orchestrator | 2025-02-19 09:22:11 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:14.910883 | orchestrator | 2025-02-19 09:22:11 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:14.910937 | orchestrator | 2025-02-19 09:22:14 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:17.962667 | orchestrator | 2025-02-19 09:22:14 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:17.962788 | orchestrator | 2025-02-19 09:22:14 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:17.962826 | orchestrator | 2025-02-19 09:22:17 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:21.011647 | orchestrator | 2025-02-19 09:22:17 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:21.011764 | orchestrator | 2025-02-19 09:22:17 | INFO  | Wait 1 second(s) until 
the next check 2025-02-19 09:22:21.011800 | orchestrator | 2025-02-19 09:22:21 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:21.014548 | orchestrator | 2025-02-19 09:22:21 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:24.094190 | orchestrator | 2025-02-19 09:22:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:24.094350 | orchestrator | 2025-02-19 09:22:24 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:24.096223 | orchestrator | 2025-02-19 09:22:24 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:27.143972 | orchestrator | 2025-02-19 09:22:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:27.144136 | orchestrator | 2025-02-19 09:22:27 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state STARTED 2025-02-19 09:22:30.184767 | orchestrator | 2025-02-19 09:22:27 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:30.184881 | orchestrator | 2025-02-19 09:22:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:30.184998 | orchestrator | 2025-02-19 09:22:30 | INFO  | Task 9c64599f-8110-4b7a-b4e6-beb69fb4438e is in state SUCCESS 2025-02-19 09:22:30.187256 | orchestrator | 2025-02-19 09:22:30.187296 | orchestrator | 2025-02-19 09:22:30.187309 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:22:30.187322 | orchestrator | 2025-02-19 09:22:30.187335 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-02-19 09:22:30.187348 | orchestrator | Wednesday 19 February 2025 09:09:01 +0000 (0:00:01.150) 0:00:01.150 **** 2025-02-19 09:22:30.187360 | orchestrator | changed: [testbed-manager] 2025-02-19 09:22:30.187374 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.187413 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:22:30.187426 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:22:30.187439 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:22:30.187451 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:22:30.187464 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:22:30.187476 | orchestrator | 2025-02-19 09:22:30.187489 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:22:30.187878 | orchestrator | Wednesday 19 February 2025 09:09:04 +0000 (0:00:03.267) 0:00:04.417 **** 2025-02-19 09:22:30.187898 | orchestrator | changed: [testbed-manager] 2025-02-19 09:22:30.187911 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.187923 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:22:30.187936 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:22:30.187948 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:22:30.187960 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:22:30.187973 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:22:30.187985 | orchestrator | 2025-02-19 09:22:30.187997 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:22:30.188010 | orchestrator | Wednesday 19 February 2025 09:09:09 +0000 (0:00:04.756) 0:00:09.173 **** 2025-02-19 09:22:30.188022 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-02-19 09:22:30.188084 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 
2025-02-19 09:22:30.188100 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-02-19 09:22:30.188113 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-02-19 09:22:30.188172 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-02-19 09:22:30.188191 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-02-19 09:22:30.188205 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-02-19 09:22:30.188218 | orchestrator | 2025-02-19 09:22:30.188232 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-02-19 09:22:30.188245 | orchestrator | 2025-02-19 09:22:30.188259 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-02-19 09:22:30.188272 | orchestrator | Wednesday 19 February 2025 09:09:13 +0000 (0:00:04.138) 0:00:13.312 **** 2025-02-19 09:22:30.188285 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:22:30.188299 | orchestrator | 2025-02-19 09:22:30.188312 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-02-19 09:22:30.188326 | orchestrator | Wednesday 19 February 2025 09:09:16 +0000 (0:00:03.142) 0:00:16.454 **** 2025-02-19 09:22:30.188341 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-02-19 09:22:30.188355 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-02-19 09:22:30.188417 | orchestrator | 2025-02-19 09:22:30.188433 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-02-19 09:22:30.188446 | orchestrator | Wednesday 19 February 2025 09:09:23 +0000 (0:00:06.709) 0:00:23.164 **** 2025-02-19 09:22:30.188459 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-19 09:22:30.188473 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-19 09:22:30.188486 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.188499 | orchestrator | 2025-02-19 09:22:30.188513 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-02-19 09:22:30.188526 | orchestrator | Wednesday 19 February 2025 09:09:28 +0000 (0:00:05.096) 0:00:28.261 **** 2025-02-19 09:22:30.188575 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.188590 | orchestrator | 2025-02-19 09:22:30.188927 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-02-19 09:22:30.188947 | orchestrator | Wednesday 19 February 2025 09:09:29 +0000 (0:00:00.928) 0:00:29.189 **** 2025-02-19 09:22:30.188960 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.188974 | orchestrator | 2025-02-19 09:22:30.188987 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-02-19 09:22:30.189000 | orchestrator | Wednesday 19 February 2025 09:09:31 +0000 (0:00:01.883) 0:00:31.073 **** 2025-02-19 09:22:30.189013 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.189027 | orchestrator | 2025-02-19 09:22:30.189040 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-19 09:22:30.189054 | orchestrator | Wednesday 19 February 2025 09:09:36 +0000 (0:00:04.574) 0:00:35.647 **** 2025-02-19 09:22:30.189067 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.189081 | orchestrator | skipping: [testbed-node-1] 2025-02-19 
09:22:30.189094 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.189107 | orchestrator | 2025-02-19 09:22:30.189121 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-02-19 09:22:30.189134 | orchestrator | Wednesday 19 February 2025 09:09:37 +0000 (0:00:01.273) 0:00:36.921 **** 2025-02-19 09:22:30.189147 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:22:30.189161 | orchestrator | 2025-02-19 09:22:30.189174 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-02-19 09:22:30.189188 | orchestrator | Wednesday 19 February 2025 09:10:10 +0000 (0:00:33.482) 0:01:10.403 **** 2025-02-19 09:22:30.189201 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.189215 | orchestrator | 2025-02-19 09:22:30.189228 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-02-19 09:22:30.189241 | orchestrator | Wednesday 19 February 2025 09:10:26 +0000 (0:00:15.481) 0:01:25.885 **** 2025-02-19 09:22:30.189254 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:22:30.189268 | orchestrator | 2025-02-19 09:22:30.189364 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-02-19 09:22:30.189412 | orchestrator | Wednesday 19 February 2025 09:10:42 +0000 (0:00:15.887) 0:01:41.773 **** 2025-02-19 09:22:30.189454 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:22:30.189469 | orchestrator | 2025-02-19 09:22:30.189481 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-02-19 09:22:30.189509 | orchestrator | Wednesday 19 February 2025 09:10:43 +0000 (0:00:01.669) 0:01:43.443 **** 2025-02-19 09:22:30.190316 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.190360 | orchestrator | 2025-02-19 09:22:30.190378 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-19 09:22:30.190425 | orchestrator | Wednesday 19 February 2025 09:10:44 +0000 (0:00:00.442) 0:01:43.885 **** 2025-02-19 09:22:30.190441 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:22:30.190455 | orchestrator | 2025-02-19 09:22:30.190470 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-02-19 09:22:30.190484 | orchestrator | Wednesday 19 February 2025 09:10:45 +0000 (0:00:00.816) 0:01:44.701 **** 2025-02-19 09:22:30.190553 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:22:30.190588 | orchestrator | 2025-02-19 09:22:30.190603 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-02-19 09:22:30.190637 | orchestrator | Wednesday 19 February 2025 09:11:04 +0000 (0:00:19.468) 0:02:04.169 **** 2025-02-19 09:22:30.190652 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.190666 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.190680 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.190695 | orchestrator | 2025-02-19 09:22:30.190709 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-02-19 09:22:30.190724 | orchestrator | 2025-02-19 09:22:30.190738 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-02-19 09:22:30.190752 | orchestrator | Wednesday 19 February 2025 09:11:07 +0000 (0:00:02.673) 
0:02:06.843 **** 2025-02-19 09:22:30.190766 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:22:30.190780 | orchestrator | 2025-02-19 09:22:30.190794 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-02-19 09:22:30.190808 | orchestrator | Wednesday 19 February 2025 09:11:10 +0000 (0:00:03.342) 0:02:10.186 **** 2025-02-19 09:22:30.190822 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.190837 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.190852 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.190884 | orchestrator | 2025-02-19 09:22:30.190899 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-02-19 09:22:30.190914 | orchestrator | Wednesday 19 February 2025 09:11:12 +0000 (0:00:02.163) 0:02:12.349 **** 2025-02-19 09:22:30.190929 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.190945 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.190960 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.190975 | orchestrator | 2025-02-19 09:22:30.190989 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-02-19 09:22:30.191003 | orchestrator | Wednesday 19 February 2025 09:11:15 +0000 (0:00:02.668) 0:02:15.018 **** 2025-02-19 09:22:30.191031 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.191057 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.191072 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.191086 | orchestrator | 2025-02-19 09:22:30.191101 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-02-19 09:22:30.191128 | orchestrator | Wednesday 19 February 2025 09:11:15 +0000 (0:00:00.495) 0:02:15.513 **** 2025-02-19 09:22:30.191155 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-19 09:22:30.191170 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.191184 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-19 09:22:30.191199 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.191213 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-02-19 09:22:30.191227 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-02-19 09:22:30.191241 | orchestrator | 2025-02-19 09:22:30.191255 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-02-19 09:22:30.191270 | orchestrator | Wednesday 19 February 2025 09:11:24 +0000 (0:00:08.695) 0:02:24.209 **** 2025-02-19 09:22:30.191284 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.191299 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.191313 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.191328 | orchestrator | 2025-02-19 09:22:30.191342 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-02-19 09:22:30.191356 | orchestrator | Wednesday 19 February 2025 09:11:25 +0000 (0:00:01.088) 0:02:25.298 **** 2025-02-19 09:22:30.191370 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-02-19 09:22:30.191404 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.191420 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-02-19 09:22:30.191434 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.191449 | 
orchestrator | skipping: [testbed-node-2] => (item=None)  2025-02-19 09:22:30.191473 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.191487 | orchestrator | 2025-02-19 09:22:30.191501 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-02-19 09:22:30.191515 | orchestrator | Wednesday 19 February 2025 09:11:29 +0000 (0:00:03.895) 0:02:29.193 **** 2025-02-19 09:22:30.191530 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.191544 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.191558 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.191572 | orchestrator | 2025-02-19 09:22:30.191586 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-02-19 09:22:30.191601 | orchestrator | Wednesday 19 February 2025 09:11:30 +0000 (0:00:00.612) 0:02:29.806 **** 2025-02-19 09:22:30.191623 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.191642 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.191657 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.191671 | orchestrator | 2025-02-19 09:22:30.191685 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-02-19 09:22:30.191699 | orchestrator | Wednesday 19 February 2025 09:11:31 +0000 (0:00:01.090) 0:02:30.897 **** 2025-02-19 09:22:30.191713 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.191727 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.192148 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.192182 | orchestrator | 2025-02-19 09:22:30.192197 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-02-19 09:22:30.192213 | orchestrator | Wednesday 19 February 2025 09:11:37 +0000 (0:00:06.072) 0:02:36.969 **** 2025-02-19 09:22:30.192227 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.192241 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.192255 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:22:30.192270 | orchestrator | 2025-02-19 09:22:30.192284 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-02-19 09:22:30.192308 | orchestrator | Wednesday 19 February 2025 09:12:00 +0000 (0:00:22.777) 0:02:59.746 **** 2025-02-19 09:22:30.192323 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.192337 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.192352 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:22:30.192366 | orchestrator | 2025-02-19 09:22:30.192380 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-02-19 09:22:30.192421 | orchestrator | Wednesday 19 February 2025 09:12:19 +0000 (0:00:19.089) 0:03:18.836 **** 2025-02-19 09:22:30.192436 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:22:30.192451 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.192465 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.192479 | orchestrator | 2025-02-19 09:22:30.192494 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-02-19 09:22:30.192525 | orchestrator | Wednesday 19 February 2025 09:12:25 +0000 (0:00:05.948) 0:03:24.784 **** 2025-02-19 09:22:30.192540 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.192554 | orchestrator | skipping: [testbed-node-2] 2025-02-19 
09:22:30.192569 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.192583 | orchestrator | 2025-02-19 09:22:30.192597 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-02-19 09:22:30.192611 | orchestrator | Wednesday 19 February 2025 09:12:45 +0000 (0:00:20.193) 0:03:44.978 **** 2025-02-19 09:22:30.192625 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.192639 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.192654 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.192668 | orchestrator | 2025-02-19 09:22:30.192682 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-02-19 09:22:30.192696 | orchestrator | Wednesday 19 February 2025 09:12:53 +0000 (0:00:08.245) 0:03:53.223 **** 2025-02-19 09:22:30.192710 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.192724 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.192750 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.192767 | orchestrator | 2025-02-19 09:22:30.192783 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-02-19 09:22:30.192799 | orchestrator | 2025-02-19 09:22:30.192815 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-19 09:22:30.192831 | orchestrator | Wednesday 19 February 2025 09:12:54 +0000 (0:00:00.973) 0:03:54.196 **** 2025-02-19 09:22:30.192847 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:22:30.192864 | orchestrator | 2025-02-19 09:22:30.192880 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-02-19 09:22:30.192897 | orchestrator | Wednesday 19 February 2025 09:12:58 +0000 (0:00:03.860) 0:03:58.056 **** 2025-02-19 09:22:30.192912 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-02-19 09:22:30.192928 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-02-19 09:22:30.192942 | orchestrator | 2025-02-19 09:22:30.192957 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-02-19 09:22:30.192971 | orchestrator | Wednesday 19 February 2025 09:13:03 +0000 (0:00:05.242) 0:04:03.298 **** 2025-02-19 09:22:30.192999 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-02-19 09:22:30.193027 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-02-19 09:22:30.193042 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-02-19 09:22:30.193057 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-02-19 09:22:30.193071 | orchestrator | 2025-02-19 09:22:30.193085 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-02-19 09:22:30.193099 | orchestrator | Wednesday 19 February 2025 09:13:12 +0000 (0:00:09.218) 0:04:12.517 **** 2025-02-19 09:22:30.193113 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:22:30.193128 | orchestrator | 2025-02-19 09:22:30.193142 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2025-02-19 09:22:30.193156 | orchestrator | Wednesday 19 February 2025 09:13:17 +0000 (0:00:04.624) 0:04:17.142 **** 2025-02-19 09:22:30.193170 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:22:30.193184 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-02-19 09:22:30.193198 | orchestrator | 2025-02-19 09:22:30.193212 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-02-19 09:22:30.193226 | orchestrator | Wednesday 19 February 2025 09:13:22 +0000 (0:00:05.145) 0:04:22.287 **** 2025-02-19 09:22:30.193241 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:22:30.193255 | orchestrator | 2025-02-19 09:22:30.193270 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-02-19 09:22:30.193284 | orchestrator | Wednesday 19 February 2025 09:13:26 +0000 (0:00:04.159) 0:04:26.447 **** 2025-02-19 09:22:30.193298 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-02-19 09:22:30.193312 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-02-19 09:22:30.193326 | orchestrator | 2025-02-19 09:22:30.193340 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-02-19 09:22:30.193545 | orchestrator | Wednesday 19 February 2025 09:13:39 +0000 (0:00:12.229) 0:04:38.677 **** 2025-02-19 09:22:30.193615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.193647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.193673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.193690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.193800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.193823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.193864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.193881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.193896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.193911 | orchestrator | 2025-02-19 09:22:30.193926 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-02-19 09:22:30.193940 | orchestrator | Wednesday 19 February 2025 09:13:44 +0000 (0:00:05.310) 0:04:43.987 **** 2025-02-19 09:22:30.193954 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.193969 | orchestrator | 2025-02-19 09:22:30.193983 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-02-19 09:22:30.194011 | orchestrator | Wednesday 19 February 2025 09:13:44 +0000 (0:00:00.402) 0:04:44.390 **** 2025-02-19 09:22:30.194063 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.194080 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.194095 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.194109 | orchestrator | 2025-02-19 09:22:30.194138 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-02-19 09:22:30.194160 | orchestrator | Wednesday 19 February 2025 09:13:46 +0000 (0:00:02.217) 0:04:46.607 **** 2025-02-19 09:22:30.194179 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-19 09:22:30.194193 | orchestrator | 2025-02-19 09:22:30.194244 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-02-19 09:22:30.194261 | orchestrator | Wednesday 19 February 2025 09:13:49 +0000 (0:00:02.818) 0:04:49.425 **** 2025-02-19 09:22:30.194275 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.194460 | 
orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.194487 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.194512 | orchestrator | 2025-02-19 09:22:30.194526 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-02-19 09:22:30.194541 | orchestrator | Wednesday 19 February 2025 09:13:52 +0000 (0:00:02.611) 0:04:52.036 **** 2025-02-19 09:22:30.194562 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:22:30.194586 | orchestrator | 2025-02-19 09:22:30.194607 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-02-19 09:22:30.194630 | orchestrator | Wednesday 19 February 2025 09:13:55 +0000 (0:00:03.461) 0:04:55.498 **** 2025-02-19 09:22:30.194648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.194678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.194779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.194832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.194847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.194861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.194874 | orchestrator | 2025-02-19 09:22:30.194887 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-02-19 09:22:30.194901 | orchestrator | Wednesday 19 February 2025 09:14:00 +0000 (0:00:04.286) 0:04:59.784 **** 2025-02-19 09:22:30.194914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.194929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.194947 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.195048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.195069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.195083 | orchestrator | skipping: 
[testbed-node-1] 2025-02-19 09:22:30.195097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.195120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.195141 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.195154 | orchestrator | 2025-02-19 09:22:30.195166 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-02-19 09:22:30.195179 | orchestrator | Wednesday 19 February 2025 09:14:02 +0000 (0:00:02.205) 0:05:01.989 **** 2025-02-19 09:22:30.195259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.195279 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.195293 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.195322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.195349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.195371 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.195510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.195535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.195549 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.195562 | orchestrator | 2025-02-19 09:22:30.195574 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-02-19 09:22:30.195587 | orchestrator | Wednesday 19 February 2025 09:14:04 +0000 (0:00:01.814) 0:05:03.803 **** 2025-02-19 09:22:30.195600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.195614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.195724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.195744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.195758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.195771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.195792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.195820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.195900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.195919 | orchestrator | 2025-02-19 09:22:30.195932 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-02-19 09:22:30.195944 | orchestrator | Wednesday 19 February 2025 09:14:07 +0000 (0:00:03.361) 0:05:07.165 **** 2025-02-19 09:22:30.195957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.195971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.196005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.196084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.196103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 
'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.196129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.196178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196191 | orchestrator | 2025-02-19 09:22:30.196204 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-02-19 09:22:30.196217 | orchestrator | Wednesday 19 February 2025 09:14:23 +0000 (0:00:15.470) 0:05:22.635 **** 2025-02-19 09:22:30.196295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.196314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196349 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.196374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.196410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196477 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.196490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-19 09:22:30.196503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196538 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.196550 | orchestrator | 2025-02-19 09:22:30.196563 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-02-19 09:22:30.196576 | orchestrator | 
Wednesday 19 February 2025 09:14:25 +0000 (0:00:02.486) 0:05:25.122 **** 2025-02-19 09:22:30.196588 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.196601 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:22:30.196613 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:22:30.196626 | orchestrator | 2025-02-19 09:22:30.196638 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-02-19 09:22:30.196651 | orchestrator | Wednesday 19 February 2025 09:14:29 +0000 (0:00:04.033) 0:05:29.155 **** 2025-02-19 09:22:30.196663 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.196676 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.196688 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.196700 | orchestrator | 2025-02-19 09:22:30.196713 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-02-19 09:22:30.196725 | orchestrator | Wednesday 19 February 2025 09:14:31 +0000 (0:00:01.861) 0:05:31.017 **** 2025-02-19 09:22:30.196778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.196796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.196816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.196830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-19 09:22:30.196897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.196911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.196946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.196961 | orchestrator | 2025-02-19 09:22:30.196974 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-19 09:22:30.196988 | orchestrator | Wednesday 19 February 2025 09:14:35 +0000 (0:00:04.120) 0:05:35.138 **** 2025-02-19 09:22:30.197002 | orchestrator | 2025-02-19 09:22:30.197015 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-19 09:22:30.197029 | orchestrator | Wednesday 19 February 2025 09:14:35 +0000 (0:00:00.197) 0:05:35.335 **** 2025-02-19 09:22:30.197043 | orchestrator | 2025-02-19 09:22:30.197058 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-02-19 09:22:30.197072 | orchestrator | Wednesday 19 February 2025 09:14:36 +0000 (0:00:00.378) 0:05:35.714 **** 2025-02-19 09:22:30.197085 | orchestrator | 2025-02-19 09:22:30.197100 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-02-19 09:22:30.197114 | orchestrator | Wednesday 19 February 2025 09:14:36 +0000 (0:00:00.131) 0:05:35.845 **** 2025-02-19 09:22:30.197127 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.197141 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:22:30.197155 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:22:30.197169 | orchestrator | 2025-02-19 09:22:30.197183 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-02-19 09:22:30.197201 | orchestrator | Wednesday 19 February 2025 09:14:50 +0000 (0:00:14.701) 0:05:50.550 **** 2025-02-19 09:22:30.197216 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.197230 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:22:30.197244 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:22:30.197258 | orchestrator | 2025-02-19 09:22:30.197272 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-02-19 
09:22:30.197285 | orchestrator | 2025-02-19 09:22:30.197297 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-19 09:22:30.197310 | orchestrator | Wednesday 19 February 2025 09:14:59 +0000 (0:00:08.742) 0:05:59.293 **** 2025-02-19 09:22:30.197323 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:22:30.197336 | orchestrator | 2025-02-19 09:22:30.197349 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-19 09:22:30.197361 | orchestrator | Wednesday 19 February 2025 09:15:01 +0000 (0:00:01.460) 0:06:00.753 **** 2025-02-19 09:22:30.197470 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:22:30.197490 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:22:30.197503 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:22:30.197516 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.197526 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.197536 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.197546 | orchestrator | 2025-02-19 09:22:30.197557 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-02-19 09:22:30.197567 | orchestrator | Wednesday 19 February 2025 09:15:01 +0000 (0:00:00.716) 0:06:01.470 **** 2025-02-19 09:22:30.197578 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.197588 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.197598 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.197608 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-19 09:22:30.197619 | orchestrator | 2025-02-19 09:22:30.197629 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-02-19 09:22:30.197639 | orchestrator | Wednesday 19 February 2025 09:15:02 +0000 (0:00:01.121) 0:06:02.591 **** 2025-02-19 09:22:30.197650 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-02-19 09:22:30.197660 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-02-19 09:22:30.197671 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-02-19 09:22:30.197681 | orchestrator | 2025-02-19 09:22:30.197692 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-02-19 09:22:30.197703 | orchestrator | Wednesday 19 February 2025 09:15:03 +0000 (0:00:00.768) 0:06:03.360 **** 2025-02-19 09:22:30.197713 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-02-19 09:22:30.197724 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-02-19 09:22:30.197735 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-02-19 09:22:30.197745 | orchestrator | 2025-02-19 09:22:30.197756 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-02-19 09:22:30.197766 | orchestrator | Wednesday 19 February 2025 09:15:05 +0000 (0:00:01.462) 0:06:04.823 **** 2025-02-19 09:22:30.197776 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-02-19 09:22:30.197786 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:22:30.197797 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-02-19 09:22:30.197807 | orchestrator | skipping: [testbed-node-4] 2025-02-19 
09:22:30.197817 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-02-19 09:22:30.197828 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:22:30.197838 | orchestrator | 2025-02-19 09:22:30.197848 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-02-19 09:22:30.197858 | orchestrator | Wednesday 19 February 2025 09:15:05 +0000 (0:00:00.726) 0:06:05.549 **** 2025-02-19 09:22:30.197869 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:22:30.197879 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 09:22:30.197889 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.197899 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:22:30.197910 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 09:22:30.197920 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-19 09:22:30.197930 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-19 09:22:30.197941 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.197951 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-19 09:22:30.197961 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-19 09:22:30.197971 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.197982 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-19 09:22:30.198001 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-19 09:22:30.198039 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-02-19 09:22:30.198051 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-02-19 09:22:30.198063 | orchestrator | 2025-02-19 09:22:30.198073 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-02-19 09:22:30.198083 | orchestrator | Wednesday 19 February 2025 09:15:08 +0000 (0:00:02.343) 0:06:07.892 **** 2025-02-19 09:22:30.198093 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.198103 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:22:30.198113 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:22:30.198124 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.198134 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:22:30.198144 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.198154 | orchestrator | 2025-02-19 09:22:30.198165 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-02-19 09:22:30.198182 | orchestrator | Wednesday 19 February 2025 09:15:09 +0000 (0:00:01.624) 0:06:09.516 **** 2025-02-19 09:22:30.198193 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.198203 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.198213 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.198223 | orchestrator | changed: [testbed-node-3] 2025-02-19 09:22:30.198233 | orchestrator | changed: [testbed-node-5] 2025-02-19 09:22:30.198243 | orchestrator | changed: [testbed-node-4] 2025-02-19 09:22:30.198253 | orchestrator | 2025-02-19 09:22:30.198263 | 
orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-02-19 09:22:30.198273 | orchestrator | Wednesday 19 February 2025 09:15:11 +0000 (0:00:01.969) 0:06:11.486 **** 2025-02-19 09:22:30.198326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.198416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.198462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.198476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.198494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.198505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.198522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 
'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.198533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.198552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.198600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.198611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.198627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.198639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.198694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.198716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.198733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.198754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.198786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.198809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.198836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.198847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.198858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.198869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.198916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.198936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.198964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.198974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.199032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.199057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199194 | orchestrator | 2025-02-19 09:22:30.199204 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-19 09:22:30.199215 | orchestrator | Wednesday 19 February 2025 09:15:15 +0000 (0:00:03.318) 0:06:14.804 **** 2025-02-19 09:22:30.199226 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:22:30.199238 | orchestrator | 2025-02-19 09:22:30.199248 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-02-19 09:22:30.199258 | orchestrator | Wednesday 19 February 2025 09:15:17 +0000 (0:00:01.979) 0:06:16.783 **** 2025-02-19 09:22:30.199269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199419 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199482 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.199649 | orchestrator | 2025-02-19 09:22:30.199660 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-02-19 09:22:30.199670 | orchestrator | Wednesday 19 February 2025 09:15:21 +0000 (0:00:04.534) 0:06:21.318 **** 2025-02-19 09:22:30.199681 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.199692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.199736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.199769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.199780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199790 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:22:30.199801 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:22:30.199811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.199830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.199883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 
'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199894 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.199910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.199921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199931 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:22:30.199942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.199960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.199998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 
'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200011 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.200021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.200032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200053 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.200063 | orchestrator | 2025-02-19 09:22:30.200073 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-02-19 09:22:30.200084 | orchestrator | Wednesday 19 February 2025 09:15:24 +0000 (0:00:02.542) 0:06:23.860 **** 2025-02-19 09:22:30.200098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.200119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.200153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200165 | orchestrator | skipping: [testbed-node-5] 2025-02-19 09:22:30.200175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.200186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.200205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200216 | orchestrator | skipping: [testbed-node-3] 2025-02-19 09:22:30.200227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.200249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.200283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200295 | orchestrator | skipping: [testbed-node-4] 2025-02-19 09:22:30.200305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.200316 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.200362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200373 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.200467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.200484 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.200495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-02-19 09:22:30.200506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-02-19 09:22:30.200517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-02-19 09:22:30.200528 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:22:30.200538 | orchestrator |
2025-02-19 09:22:30.200548 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-02-19 09:22:30.200558 | orchestrator | Wednesday 19 February 2025 09:15:27 +0000 (0:00:03.512) 0:06:27.373 ****
2025-02-19 09:22:30.200576 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:22:30.200586 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:22:30.200596 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:22:30.200607 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-02-19 09:22:30.200617 | orchestrator |
2025-02-19 09:22:30.200627 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-02-19 09:22:30.200638 | orchestrator | Wednesday 19 February 2025 09:15:29 +0000 (0:00:02.236) 0:06:29.610 ****
2025-02-19 09:22:30.200648 | orchestrator | fatal: [testbed-node-3 -> localhost]: FAILED! => {"msg": "No file was found when using first_found."}
2025-02-19 09:22:30.200658 | orchestrator | fatal: [testbed-node-4 -> localhost]: FAILED! => {"msg": "No file was found when using first_found."}
2025-02-19 09:22:30.200669 | orchestrator | fatal: [testbed-node-5 -> localhost]: FAILED! => {"msg": "No file was found when using first_found."}
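The three fatals above come from a first_found lookup in the nova-cell role's external Ceph handling: Ansible raises "No file was found when using first_found." when none of the candidate paths given to the lookup exists on the delegated host (here localhost, the deployment node). In this run that means no nova Ceph keyring was staged for the compute nodes testbed-node-3/4/5. A minimal sketch of such a check follows; the task name matches the log, but the variable names and candidate paths are illustrative assumptions, not the actual contents of /ansible/roles/nova-cell/tasks/external_ceph.yml.

  # Sketch only - variable names and search paths are assumed, not taken
  # from the real role.
  - name: Check nova keyring file
    delegate_to: localhost
    vars:
      nova_keyring_candidates:                             # hypothetical variable
        - "{{ node_custom_config }}/nova/ceph.client.nova.keyring"
        - "{{ node_custom_config }}/ceph.client.nova.keyring"
    stat:
      path: "{{ lookup('first_found', nova_keyring_candidates) }}"
  # If no candidate exists, the lookup itself raises
  # "No file was found when using first_found." and the task fails before
  # stat runs, which is the fatal reported for testbed-node-3/4/5 above.

In a kolla-ansible external-Ceph setup this is usually resolved by staging ceph.client.nova.keyring (and the matching ceph.conf) under the node_custom_config directory before deploying, or by not enabling external Ceph for nova at all; the exact paths expected by this testbed's configuration repository are not visible in the log.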
2025-02-19 09:22:30.200679 | orchestrator |
2025-02-19 09:22:30.200687 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-02-19 09:22:30.200696 | orchestrator | Wednesday 19 February 2025 09:15:30 +0000 (0:00:00.915) 0:06:30.525 ****
2025-02-19 09:22:30.200705 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:22:30.200714 | orchestrator |
2025-02-19 09:22:30.200722 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-02-19 09:22:30.200731 | orchestrator | Wednesday 19 February 2025 09:15:31 +0000 (0:00:00.332) 0:06:30.858 ****
2025-02-19 09:22:30.200739 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:22:30.200748 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:22:30.200757 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:22:30.200766 | orchestrator |
2025-02-19 09:22:30.200774 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-02-19 09:22:30.200783 | orchestrator | Wednesday 19 February 2025 09:15:31 +0000 (0:00:00.599) 0:06:31.458 ****
2025-02-19 09:22:30.200791 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-02-19 09:22:30.200800 | orchestrator |
2025-02-19 09:22:30.200809 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-02-19 09:22:30.200817 | orchestrator | Wednesday 19 February 2025 09:15:32 +0000 (0:00:00.737) 0:06:32.196 ****
2025-02-19 09:22:30.200826 | orchestrator | skipping: [testbed-node-0]
2025-02-19 09:22:30.200834 | orchestrator | skipping: [testbed-node-1]
2025-02-19 09:22:30.200843 | orchestrator | skipping: [testbed-node-2]
2025-02-19 09:22:30.200852 | orchestrator |
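Each (item=...) echoed by these loops is one kolla-ansible service definition from the nova-cell role, printed as a Python dict. Reshaped as YAML purely for readability, the nova-conductor entry that appears repeatedly above reads roughly as follows (the empty-string volume entries appear to be optional mounts left unset in this run); this is the same data as in the log, not a listing of the actual role defaults.

  # Same data as the nova-conductor loop item above, reshaped for readability.
  nova-conductor:
    container_name: nova_conductor
    group: nova-conductor
    enabled: true
    image: registry.osism.tech/kolla/nova-conductor:2024.1
    volumes:
      - /etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - kolla_logs:/var/log/kolla/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_port nova-conductor 5672"]
      timeout: "30"

The healthcheck block is passed through to the container engine's health check; helpers such as healthcheck_port, healthcheck_curl and healthcheck_listen are small scripts shipped in the kolla images. The next task renders each service's config.json under /etc/kolla/<service>/ on the target hosts; that file tells the container's kolla_start entrypoint which command to run and which files to copy out of /var/lib/kolla/config_files/, which is why every definition above bind-mounts /etc/kolla/<service>/ read-only at that path.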
2025-02-19 09:22:30.200860 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-02-19 09:22:30.200889 | orchestrator | Wednesday 19 February 2025 09:15:33 +0000 (0:00:00.568) 0:06:32.764 ****
2025-02-19 09:22:30.200908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-02-19 09:22:30.200918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-02-19 09:22:30.200934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-02-19 09:22:30.200943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-02-19 09:22:30.200952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-02-19 09:22:30.200991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-02-19 09:22:30.201002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.201011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.201029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.201039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.201056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.201086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.201128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.201137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.201146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.201168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.201198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201226 | orchestrator | 2025-02-19 09:22:30.201234 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-02-19 09:22:30.201243 | orchestrator | Wednesday 19 February 2025 09:15:38 +0000 (0:00:05.722) 0:06:38.487 **** 2025-02-19 09:22:30.201252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.201267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.201290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.201299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.201308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.201317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.201326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.201356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.201372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.201413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.201422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.201444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.201459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.201484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.201503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.201531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.201566 | orchestrator | 2025-02-19 09:22:30.201575 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-02-19 09:22:30.201584 | orchestrator | Wednesday 19 February 2025 09:15:55 +0000 (0:00:16.571) 0:06:55.058 **** 2025-02-19 09:22:30.201593 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.201601 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.201610 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.201618 | orchestrator | 2025-02-19 09:22:30.201627 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-02-19 09:22:30.201636 | orchestrator | Wednesday 19 February 2025 09:15:57 +0000 (0:00:01.729) 0:06:56.788 **** 2025-02-19 09:22:30.201645 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-19 09:22:30.201657 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-19 09:22:30.201665 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-02-19 09:22:30.201674 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-19 09:22:30.201683 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.201691 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-19 09:22:30.201700 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.201708 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-02-19 09:22:30.201717 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.201726 | orchestrator | 2025-02-19 09:22:30.201734 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-02-19 09:22:30.201743 | orchestrator | Wednesday 19 February 2025 09:15:59 +0000 (0:00:02.297) 0:06:59.085 **** 2025-02-19 09:22:30.201757 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.201766 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.201774 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.201783 | orchestrator | 2025-02-19 09:22:30.201792 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-02-19 09:22:30.201800 | orchestrator | Wednesday 19 February 2025 09:15:59 +0000 (0:00:00.422) 0:06:59.508 **** 2025-02-19 09:22:30.201809 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-19 09:22:30.201818 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-19 09:22:30.201827 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-19 09:22:30.201835 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-02-19 09:22:30.201848 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-19 09:22:30.201857 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-19 09:22:30.201865 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.201874 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-19 09:22:30.201882 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.201891 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-02-19 09:22:30.201900 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-02-19 09:22:30.201909 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.201917 | orchestrator | 2025-02-19 09:22:30.201926 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-02-19 09:22:30.201934 | orchestrator | Wednesday 19 February 2025 09:16:05 +0000 (0:00:05.537) 0:07:05.045 **** 2025-02-19 09:22:30.201943 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-19 09:22:30.201951 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-19 09:22:30.201960 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-02-19 09:22:30.201968 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-19 09:22:30.201977 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-19 09:22:30.201985 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-02-19 09:22:30.201994 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-19 09:22:30.202002 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-19 09:22:30.202011 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-19 09:22:30.202040 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-19 09:22:30.202051 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.202059 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-19 09:22:30.202068 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.202077 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-02-19 09:22:30.202085 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.202094 | orchestrator | 2025-02-19 09:22:30.202103 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-02-19 09:22:30.202112 | orchestrator | Wednesday 19 February 2025 09:16:11 +0000 (0:00:06.144) 0:07:11.190 **** 2025-02-19 09:22:30.202127 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.202136 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.202144 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.202152 | orchestrator | 2025-02-19 09:22:30.202161 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-02-19 09:22:30.202170 | orchestrator | Wednesday 19 February 2025 09:16:12 +0000 (0:00:00.958) 0:07:12.148 **** 2025-02-19 09:22:30.202178 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.202186 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.202195 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.202204 | orchestrator | 2025-02-19 09:22:30.202212 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-02-19 09:22:30.202221 | orchestrator | Wednesday 19 February 2025 09:16:14 +0000 (0:00:01.593) 0:07:13.742 **** 2025-02-19 09:22:30.202229 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.202238 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.202247 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.202255 | orchestrator | 2025-02-19 09:22:30.202264 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-02-19 09:22:30.202272 | orchestrator | Wednesday 19 February 2025 09:16:18 +0000 (0:00:04.685) 0:07:18.427 **** 2025-02-19 09:22:30.202281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.202296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.202305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.202338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202378 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.202403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.202413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.202427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.202461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202493 | orchestrator | skipping: 
[testbed-node-1] 2025-02-19 09:22:30.202502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.202522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.202532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.202568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202600 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.202608 | orchestrator | 2025-02-19 09:22:30.202617 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-02-19 09:22:30.202626 | orchestrator | Wednesday 19 February 2025 09:16:22 +0000 (0:00:03.939) 0:07:22.367 **** 2025-02-19 09:22:30.202635 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-02-19 09:22:30.202643 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-02-19 09:22:30.202652 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.202661 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-02-19 09:22:30.202669 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-02-19 09:22:30.202678 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.202690 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-02-19 09:22:30.202699 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-02-19 09:22:30.202707 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.202716 | orchestrator | 2025-02-19 09:22:30.202725 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-02-19 09:22:30.202733 | orchestrator | Wednesday 19 February 2025 09:16:23 +0000 (0:00:00.901) 0:07:23.268 **** 2025-02-19 09:22:30.202749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.202763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.202772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.202785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.202801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-02-19 09:22:30.202810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-02-19 09:22:30.202819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.202832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.202855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.202864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.202882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-02-19 09:22:30.202891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-02-19 09:22:30.202900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-19 09:22:30.202913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.202934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.202953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.202970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-02-19 09:22:30.202996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.203006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.203015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-02-19 09:22:30.203024 | orchestrator | 2025-02-19 09:22:30.203033 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-02-19 09:22:30.203041 | orchestrator | Wednesday 19 February 2025 09:16:27 +0000 (0:00:03.856) 0:07:27.125 **** 2025-02-19 09:22:30.203050 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.203062 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:22:30.203071 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.203080 | orchestrator | 2025-02-19 09:22:30.203088 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-19 09:22:30.203097 | orchestrator | Wednesday 19 February 2025 09:16:27 +0000 (0:00:00.386) 0:07:27.512 **** 2025-02-19 09:22:30.203105 | orchestrator | 2025-02-19 09:22:30.203114 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-19 09:22:30.203122 | orchestrator | Wednesday 19 February 2025 09:16:28 +0000 (0:00:00.112) 0:07:27.624 **** 2025-02-19 09:22:30.203130 | orchestrator | 2025-02-19 09:22:30.203139 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-02-19 09:22:30.203148 | orchestrator | Wednesday 19 February 2025 09:16:28 +0000 (0:00:00.318) 0:07:27.943 **** 2025-02-19 09:22:30.203156 | orchestrator | 2025-02-19 09:22:30.203165 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-02-19 09:22:30.203173 | orchestrator | Wednesday 19 February 2025 09:16:28 +0000 (0:00:00.205) 0:07:28.148 **** 2025-02-19 09:22:30.203182 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.203190 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:22:30.203199 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:22:30.203208 | orchestrator | 2025-02-19 09:22:30.203216 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 
2025-02-19 09:22:30.203225 | orchestrator | Wednesday 19 February 2025 09:16:44 +0000 (0:00:15.507) 0:07:43.655 **** 2025-02-19 09:22:30.203233 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.203242 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:22:30.203250 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:22:30.203258 | orchestrator | 2025-02-19 09:22:30.203272 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute-ironic container] ************ 2025-02-19 09:22:30.203281 | orchestrator | Wednesday 19 February 2025 09:17:04 +0000 (0:00:20.927) 0:08:04.583 **** 2025-02-19 09:22:30.203289 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:22:30.203298 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:22:30.203306 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:22:30.203315 | orchestrator | 2025-02-19 09:22:30.203323 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-02-19 09:22:30.203332 | orchestrator | Wednesday 19 February 2025 09:17:18 +0000 (0:00:13.218) 0:08:17.802 **** 2025-02-19 09:22:30.203340 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.203349 | orchestrator | 2025-02-19 09:22:30.203358 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-02-19 09:22:30.203366 | orchestrator | Wednesday 19 February 2025 09:17:18 +0000 (0:00:00.614) 0:08:18.416 **** 2025-02-19 09:22:30.203374 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:22:30.203494 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:22:30.203537 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-02-19 09:22:30.203547 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (19 retries left). 2025-02-19 09:22:30.203564 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (18 retries left). 2025-02-19 09:22:30.203573 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (17 retries left). 2025-02-19 09:22:30.203588 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (16 retries left). 2025-02-19 09:22:30.203597 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (15 retries left). 2025-02-19 09:22:30.203606 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (14 retries left). 2025-02-19 09:22:30.203614 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (13 retries left). 2025-02-19 09:22:30.203623 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (12 retries left). 
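[Editor's note, not part of the job output] The retries above come from the "Waiting for nova-compute services to register themselves" task, which polls the Nova service list until the expected compute hosts appear. A rough diagnostic sketch of the same check, done by hand with openstacksdk, is shown below; the cloud name "testbed" and the expected host set (taken from the failure message that follows) are assumptions, not values read from the playbook.

    # Rough diagnostic sketch: list nova-compute services and compare against the
    # ironic hosts named in the failure message. Assumes a clouds.yaml entry
    # called "testbed" (hypothetical) with admin credentials for this deployment.
    import openstack

    conn = openstack.connect(cloud="testbed")
    expected = {f"testbed-node-{i}-ironic" for i in range(3)}  # hosts from the failure message
    registered = {
        svc.host
        for svc in conn.compute.services()
        if svc.binary == "nova-compute"
    }
    missing = expected - registered
    print("missing nova-compute registrations:", sorted(missing) or "none")

If the ironic hosts never show up in this list, the wait task exhausts its retries and the "Fail if nova-compute service failed to register" task below aborts the play, which is what happens in this run.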
2025-02-19 09:22:30.203632 | orchestrator | 2025-02-19 09:22:30.203641 | orchestrator | STILL ALIVE [task 'nova-cell : Waiting for nova-compute services to register themselves' is running] *** 2025-02-19 09:22:30.203649 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (11 retries left). 2025-02-19 09:22:30.203658 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (10 retries left). 2025-02-19 09:22:30.203666 | orchestrator | 2025-02-19 09:22:30.203675 | orchestrator | STILL ALIVE [task 'nova-cell : Waiting for nova-compute services to register themselves' is running] *** 2025-02-19 09:22:30.203684 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (9 retries left). 2025-02-19 09:22:30.203692 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (8 retries left). 2025-02-19 09:22:30.203701 | orchestrator | 2025-02-19 09:22:30.203709 | orchestrator | STILL ALIVE [task 'nova-cell : Waiting for nova-compute services to register themselves' is running] *** 2025-02-19 09:22:30.203717 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (7 retries left). 2025-02-19 09:22:30.203726 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (6 retries left). 2025-02-19 09:22:30.203742 | orchestrator | 2025-02-19 09:22:30.203752 | orchestrator | STILL ALIVE [task 'nova-cell : Waiting for nova-compute services to register themselves' is running] *** 2025-02-19 09:22:30.203760 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (5 retries left). 2025-02-19 09:22:30.203769 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (4 retries left). 2025-02-19 09:22:30.203777 | orchestrator | 2025-02-19 09:22:30.203786 | orchestrator | STILL ALIVE [task 'nova-cell : Waiting for nova-compute services to register themselves' is running] *** 2025-02-19 09:22:30.203795 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (3 retries left). 2025-02-19 09:22:30.203804 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (2 retries left). 2025-02-19 09:22:30.203812 | orchestrator | FAILED - RETRYING: [testbed-node-1 -> testbed-node-0]: Waiting for nova-compute services to register themselves (1 retries left). 2025-02-19 09:22:30.203821 | orchestrator | 2025-02-19 09:22:30.203829 | orchestrator | STILL ALIVE [task 'nova-cell : Waiting for nova-compute services to register themselves' is running] *** 2025-02-19 09:22:30.203838 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2025-02-19 09:22:30.203847 | orchestrator | 2025-02-19 09:22:30.203855 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-02-19 09:22:30.203870 | orchestrator | Wednesday 19 February 2025 09:22:13 +0000 (0:04:55.096) 0:13:13.512 **** 2025-02-19 09:22:30.203879 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "msg": "The Nova compute service failed to register itself on the following hosts: testbed-node-0-ironic,testbed-node-2-ironic,testbed-node-1-ironic"} 2025-02-19 09:22:30.203888 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "The Nova compute service failed to register itself on the following hosts: testbed-node-0-ironic,testbed-node-2-ironic,testbed-node-1-ironic"} 2025-02-19 09:22:30.203896 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "msg": "The Nova compute service failed to register itself on the following hosts: testbed-node-0-ironic,testbed-node-2-ironic,testbed-node-1-ironic"} 2025-02-19 09:22:30.203904 | orchestrator | 2025-02-19 09:22:30.203919 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:22:33.233915 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:22:33.234136 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=1  skipped=39  rescued=0 ignored=0 2025-02-19 09:22:33.234162 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=1  skipped=44  rescued=0 ignored=0 2025-02-19 09:22:33.234178 | orchestrator | testbed-node-2 : ok=26  changed=19  unreachable=0 failed=1  skipped=45  rescued=0 ignored=0 2025-02-19 09:22:33.234193 | orchestrator | testbed-node-3 : ok=14  changed=9  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2025-02-19 09:22:33.234207 | orchestrator | testbed-node-4 : ok=14  changed=9  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2025-02-19 09:22:33.234221 | orchestrator | testbed-node-5 : ok=14  changed=9  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2025-02-19 09:22:33.234235 | orchestrator | 2025-02-19 09:22:33.234275 | orchestrator | 2025-02-19 09:22:33.234291 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:22:33.234306 | orchestrator | Wednesday 19 February 2025 09:22:28 +0000 (0:00:14.899) 0:13:28.412 **** 2025-02-19 09:22:33.234320 | orchestrator | =============================================================================== 2025-02-19 09:22:33.234334 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves -- 295.10s 2025-02-19 09:22:33.234348 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.48s 2025-02-19 09:22:33.234363 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.77s 2025-02-19 09:22:33.234377 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.93s 2025-02-19 09:22:33.234473 | orchestrator | nova-cell : Create cell ------------------------------------------------ 20.19s 2025-02-19 09:22:33.234501 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.47s 2025-02-19 09:22:33.234525 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 19.09s 2025-02-19 09:22:33.234549 | orchestrator | nova-cell : Copying over nova.conf ------------------------------------- 16.57s 2025-02-19 09:22:33.234572 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.89s 2025-02-19 09:22:33.234596 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 15.51s 2025-02-19 09:22:33.234621 | orchestrator | nova : Create cell0 mappings 
------------------------------------------- 15.48s 2025-02-19 09:22:33.234645 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 15.47s 2025-02-19 09:22:33.234671 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 14.90s 2025-02-19 09:22:33.234718 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 14.71s 2025-02-19 09:22:33.234738 | orchestrator | nova-cell : Restart nova-compute-ironic container ---------------------- 13.22s 2025-02-19 09:22:33.234755 | orchestrator | service-ks-register : nova | Granting user roles ----------------------- 12.23s 2025-02-19 09:22:33.234771 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 9.22s 2025-02-19 09:22:33.234787 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.74s 2025-02-19 09:22:33.234802 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.70s 2025-02-19 09:22:33.234816 | orchestrator | nova-cell : Update cell ------------------------------------------------- 8.25s 2025-02-19 09:22:33.234831 | orchestrator | 2025-02-19 09:22:30 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:33.234845 | orchestrator | 2025-02-19 09:22:30 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:33.234878 | orchestrator | 2025-02-19 09:22:33 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:36.288688 | orchestrator | 2025-02-19 09:22:33 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:36.288818 | orchestrator | 2025-02-19 09:22:36 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:36.293020 | orchestrator | 2025-02-19 09:22:36 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:39.347063 | orchestrator | 2025-02-19 09:22:39 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:42.395270 | orchestrator | 2025-02-19 09:22:39 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:42.395440 | orchestrator | 2025-02-19 09:22:42 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:42.397151 | orchestrator | 2025-02-19 09:22:42 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:45.443851 | orchestrator | 2025-02-19 09:22:45 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:48.481042 | orchestrator | 2025-02-19 09:22:45 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:48.481199 | orchestrator | 2025-02-19 09:22:48 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:51.514761 | orchestrator | 2025-02-19 09:22:48 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:51.514875 | orchestrator | 2025-02-19 09:22:51 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:54.553785 | orchestrator | 2025-02-19 09:22:51 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:54.553928 | orchestrator | 2025-02-19 09:22:54 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:22:57.587044 | orchestrator | 2025-02-19 09:22:54 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:22:57.587178 | orchestrator | 2025-02-19 09:22:57 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 
09:23:00.626634 | orchestrator | 2025-02-19 09:22:57 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:00.626811 | orchestrator | 2025-02-19 09:23:00 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:03.661288 | orchestrator | 2025-02-19 09:23:00 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:03.661502 | orchestrator | 2025-02-19 09:23:03 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:06.695282 | orchestrator | 2025-02-19 09:23:03 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:06.695455 | orchestrator | 2025-02-19 09:23:06 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:09.729197 | orchestrator | 2025-02-19 09:23:06 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:09.729295 | orchestrator | 2025-02-19 09:23:09 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:12.772528 | orchestrator | 2025-02-19 09:23:09 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:12.772667 | orchestrator | 2025-02-19 09:23:12 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:15.814579 | orchestrator | 2025-02-19 09:23:12 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:15.814730 | orchestrator | 2025-02-19 09:23:15 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:18.853139 | orchestrator | 2025-02-19 09:23:15 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:18.853270 | orchestrator | 2025-02-19 09:23:18 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:21.901070 | orchestrator | 2025-02-19 09:23:18 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:21.901261 | orchestrator | 2025-02-19 09:23:21 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:24.952947 | orchestrator | 2025-02-19 09:23:21 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:24.953086 | orchestrator | 2025-02-19 09:23:24 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:27.992882 | orchestrator | 2025-02-19 09:23:24 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:27.993031 | orchestrator | 2025-02-19 09:23:27 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:31.033039 | orchestrator | 2025-02-19 09:23:27 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:31.033178 | orchestrator | 2025-02-19 09:23:31 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:34.078348 | orchestrator | 2025-02-19 09:23:31 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:34.078556 | orchestrator | 2025-02-19 09:23:34 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:37.112928 | orchestrator | 2025-02-19 09:23:34 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:37.113040 | orchestrator | 2025-02-19 09:23:37 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:40.155780 | orchestrator | 2025-02-19 09:23:37 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:40.155901 | orchestrator | 2025-02-19 09:23:40 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:43.210904 | orchestrator | 2025-02-19 09:23:40 | INFO  | Wait 1 second(s) 
until the next check 2025-02-19 09:23:43.211043 | orchestrator | 2025-02-19 09:23:43 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:46.259152 | orchestrator | 2025-02-19 09:23:43 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:46.259300 | orchestrator | 2025-02-19 09:23:46 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:49.287486 | orchestrator | 2025-02-19 09:23:46 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:49.287633 | orchestrator | 2025-02-19 09:23:49 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:52.320212 | orchestrator | 2025-02-19 09:23:49 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:52.320319 | orchestrator | 2025-02-19 09:23:52 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:55.354904 | orchestrator | 2025-02-19 09:23:52 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:55.355040 | orchestrator | 2025-02-19 09:23:55 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:23:58.381695 | orchestrator | 2025-02-19 09:23:55 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:23:58.381845 | orchestrator | 2025-02-19 09:23:58 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:24:01.429592 | orchestrator | 2025-02-19 09:23:58 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:24:01.429734 | orchestrator | 2025-02-19 09:24:01 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:24:04.462316 | orchestrator | 2025-02-19 09:24:01 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:24:04.462547 | orchestrator | 2025-02-19 09:24:04 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state STARTED 2025-02-19 09:24:07.502130 | orchestrator | 2025-02-19 09:24:04 | INFO  | Wait 1 second(s) until the next check 2025-02-19 09:24:07.502267 | orchestrator | 2025-02-19 09:24:07 | INFO  | Task 4f7d4443-c727-4112-9bde-7ea9808b6590 is in state SUCCESS 2025-02-19 09:24:07.502968 | orchestrator | 2025-02-19 09:24:07.503001 | orchestrator | 2025-02-19 09:24:07.503016 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-19 09:24:07.503031 | orchestrator | 2025-02-19 09:24:07.503045 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-19 09:24:07.503060 | orchestrator | Wednesday 19 February 2025 09:18:42 +0000 (0:00:00.505) 0:00:00.505 **** 2025-02-19 09:24:07.503074 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.503091 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:24:07.503105 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:24:07.503119 | orchestrator | 2025-02-19 09:24:07.503133 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-19 09:24:07.503148 | orchestrator | Wednesday 19 February 2025 09:18:43 +0000 (0:00:00.443) 0:00:00.948 **** 2025-02-19 09:24:07.503189 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-02-19 09:24:07.503205 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-02-19 09:24:07.503219 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-02-19 09:24:07.503233 | orchestrator | 2025-02-19 09:24:07.503247 | orchestrator | PLAY [Apply role octavia] 
****************************************************** 2025-02-19 09:24:07.503260 | orchestrator | 2025-02-19 09:24:07.503274 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-19 09:24:07.503288 | orchestrator | Wednesday 19 February 2025 09:18:43 +0000 (0:00:00.311) 0:00:01.260 **** 2025-02-19 09:24:07.503303 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:24:07.503318 | orchestrator | 2025-02-19 09:24:07.503332 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-02-19 09:24:07.503346 | orchestrator | Wednesday 19 February 2025 09:18:44 +0000 (0:00:00.785) 0:00:02.045 **** 2025-02-19 09:24:07.503361 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-02-19 09:24:07.503374 | orchestrator | 2025-02-19 09:24:07.503388 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-02-19 09:24:07.503436 | orchestrator | Wednesday 19 February 2025 09:18:49 +0000 (0:00:04.477) 0:00:06.523 **** 2025-02-19 09:24:07.503452 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-02-19 09:24:07.503467 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-02-19 09:24:07.503481 | orchestrator | 2025-02-19 09:24:07.503495 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-02-19 09:24:07.503509 | orchestrator | Wednesday 19 February 2025 09:18:55 +0000 (0:00:06.655) 0:00:13.178 **** 2025-02-19 09:24:07.503523 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-02-19 09:24:07.503537 | orchestrator | 2025-02-19 09:24:07.503551 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-02-19 09:24:07.503567 | orchestrator | Wednesday 19 February 2025 09:19:00 +0000 (0:00:04.925) 0:00:18.104 **** 2025-02-19 09:24:07.503584 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-02-19 09:24:07.503601 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-02-19 09:24:07.503617 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-02-19 09:24:07.503633 | orchestrator | 2025-02-19 09:24:07.503648 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-02-19 09:24:07.503664 | orchestrator | Wednesday 19 February 2025 09:19:10 +0000 (0:00:09.591) 0:00:27.695 **** 2025-02-19 09:24:07.503679 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-02-19 09:24:07.503695 | orchestrator | 2025-02-19 09:24:07.503710 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-02-19 09:24:07.503726 | orchestrator | Wednesday 19 February 2025 09:19:13 +0000 (0:00:03.643) 0:00:31.338 **** 2025-02-19 09:24:07.503741 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-02-19 09:24:07.503757 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-02-19 09:24:07.503773 | orchestrator | 2025-02-19 09:24:07.503789 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-02-19 09:24:07.503805 | orchestrator | Wednesday 19 February 2025 09:19:21 +0000 (0:00:08.109) 0:00:39.448 **** 2025-02-19 
09:24:07.503819 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-02-19 09:24:07.503833 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-02-19 09:24:07.503847 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-02-19 09:24:07.503861 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-02-19 09:24:07.503874 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-02-19 09:24:07.503896 | orchestrator | 2025-02-19 09:24:07.503911 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-19 09:24:07.503924 | orchestrator | Wednesday 19 February 2025 09:19:39 +0000 (0:00:17.170) 0:00:56.619 **** 2025-02-19 09:24:07.503938 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:24:07.503952 | orchestrator | 2025-02-19 09:24:07.503966 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-02-19 09:24:07.503994 | orchestrator | Wednesday 19 February 2025 09:19:40 +0000 (0:00:01.010) 0:00:57.629 **** 2025-02-19 09:24:07.504008 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.504023 | orchestrator | 2025-02-19 09:24:07.504037 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-02-19 09:24:07.504050 | orchestrator | Wednesday 19 February 2025 09:19:44 +0000 (0:00:04.736) 0:01:02.365 **** 2025-02-19 09:24:07.504064 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.504079 | orchestrator | 2025-02-19 09:24:07.504093 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-02-19 09:24:07.504115 | orchestrator | Wednesday 19 February 2025 09:19:48 +0000 (0:00:04.081) 0:01:06.447 **** 2025-02-19 09:24:07.504130 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.504144 | orchestrator | 2025-02-19 09:24:07.504158 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-02-19 09:24:07.504172 | orchestrator | Wednesday 19 February 2025 09:19:52 +0000 (0:00:03.976) 0:01:10.423 **** 2025-02-19 09:24:07.504186 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-02-19 09:24:07.504200 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-02-19 09:24:07.504214 | orchestrator | 2025-02-19 09:24:07.504228 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-02-19 09:24:07.504242 | orchestrator | Wednesday 19 February 2025 09:20:03 +0000 (0:00:10.880) 0:01:21.303 **** 2025-02-19 09:24:07.504256 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-02-19 09:24:07.504271 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-02-19 09:24:07.504286 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-02-19 09:24:07.504301 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-02-19 09:24:07.504315 | orchestrator | 2025-02-19 
09:24:07.504329 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-02-19 09:24:07.504343 | orchestrator | Wednesday 19 February 2025 09:20:20 +0000 (0:00:17.175) 0:01:38.478 **** 2025-02-19 09:24:07.504357 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.504371 | orchestrator | 2025-02-19 09:24:07.504385 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-02-19 09:24:07.504459 | orchestrator | Wednesday 19 February 2025 09:20:26 +0000 (0:00:05.391) 0:01:43.869 **** 2025-02-19 09:24:07.504476 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.504490 | orchestrator | 2025-02-19 09:24:07.504505 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-02-19 09:24:07.504518 | orchestrator | Wednesday 19 February 2025 09:20:33 +0000 (0:00:06.979) 0:01:50.849 **** 2025-02-19 09:24:07.504532 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:24:07.504546 | orchestrator | 2025-02-19 09:24:07.504560 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-02-19 09:24:07.504574 | orchestrator | Wednesday 19 February 2025 09:20:33 +0000 (0:00:00.237) 0:01:51.087 **** 2025-02-19 09:24:07.504588 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.504601 | orchestrator | 2025-02-19 09:24:07.504615 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-19 09:24:07.504637 | orchestrator | Wednesday 19 February 2025 09:20:39 +0000 (0:00:05.731) 0:01:56.818 **** 2025-02-19 09:24:07.504652 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:24:07.504666 | orchestrator | 2025-02-19 09:24:07.504680 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-02-19 09:24:07.504694 | orchestrator | Wednesday 19 February 2025 09:20:40 +0000 (0:00:01.300) 0:01:58.119 **** 2025-02-19 09:24:07.504708 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.504721 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.504735 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.504749 | orchestrator | 2025-02-19 09:24:07.504763 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-02-19 09:24:07.504777 | orchestrator | Wednesday 19 February 2025 09:20:46 +0000 (0:00:05.700) 0:02:03.819 **** 2025-02-19 09:24:07.504791 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.504805 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.504819 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.504833 | orchestrator | 2025-02-19 09:24:07.504847 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-02-19 09:24:07.504861 | orchestrator | Wednesday 19 February 2025 09:20:52 +0000 (0:00:05.691) 0:02:09.511 **** 2025-02-19 09:24:07.504874 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.504888 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.504902 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.504916 | orchestrator | 2025-02-19 09:24:07.504930 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-02-19 09:24:07.504945 | orchestrator | Wednesday 19 February 2025 09:20:53 +0000 
(0:00:01.156) 0:02:10.667 **** 2025-02-19 09:24:07.504959 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:24:07.504973 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.504985 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:24:07.504997 | orchestrator | 2025-02-19 09:24:07.505010 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-02-19 09:24:07.505022 | orchestrator | Wednesday 19 February 2025 09:20:55 +0000 (0:00:02.040) 0:02:12.708 **** 2025-02-19 09:24:07.505034 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.505047 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.505059 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.505071 | orchestrator | 2025-02-19 09:24:07.505084 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-02-19 09:24:07.505096 | orchestrator | Wednesday 19 February 2025 09:20:56 +0000 (0:00:01.504) 0:02:14.212 **** 2025-02-19 09:24:07.505108 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.505120 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.505133 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.505145 | orchestrator | 2025-02-19 09:24:07.505157 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-02-19 09:24:07.505176 | orchestrator | Wednesday 19 February 2025 09:20:57 +0000 (0:00:01.145) 0:02:15.357 **** 2025-02-19 09:24:07.505189 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.505201 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.505214 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.505226 | orchestrator | 2025-02-19 09:24:07.505291 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-02-19 09:24:07.505307 | orchestrator | Wednesday 19 February 2025 09:21:00 +0000 (0:00:02.153) 0:02:17.511 **** 2025-02-19 09:24:07.505320 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.505333 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.505345 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.505357 | orchestrator | 2025-02-19 09:24:07.505370 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-02-19 09:24:07.505382 | orchestrator | Wednesday 19 February 2025 09:21:01 +0000 (0:00:01.746) 0:02:19.257 **** 2025-02-19 09:24:07.505423 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.505436 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:24:07.505449 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:24:07.505462 | orchestrator | 2025-02-19 09:24:07.505474 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-02-19 09:24:07.505487 | orchestrator | Wednesday 19 February 2025 09:21:02 +0000 (0:00:00.936) 0:02:20.194 **** 2025-02-19 09:24:07.505500 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:24:07.505521 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:24:07.505535 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.505548 | orchestrator | 2025-02-19 09:24:07.505560 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-19 09:24:07.505573 | orchestrator | Wednesday 19 February 2025 09:21:06 +0000 (0:00:04.066) 0:02:24.261 **** 2025-02-19 09:24:07.505585 | orchestrator | included: 
/ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:24:07.505598 | orchestrator | 2025-02-19 09:24:07.505610 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-02-19 09:24:07.505622 | orchestrator | Wednesday 19 February 2025 09:21:07 +0000 (0:00:00.817) 0:02:25.078 **** 2025-02-19 09:24:07.505635 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.505647 | orchestrator | 2025-02-19 09:24:07.505660 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-02-19 09:24:07.505672 | orchestrator | Wednesday 19 February 2025 09:21:12 +0000 (0:00:04.580) 0:02:29.658 **** 2025-02-19 09:24:07.505684 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.505697 | orchestrator | 2025-02-19 09:24:07.505709 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-02-19 09:24:07.505721 | orchestrator | Wednesday 19 February 2025 09:21:16 +0000 (0:00:03.892) 0:02:33.551 **** 2025-02-19 09:24:07.505737 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-02-19 09:24:07.505750 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-02-19 09:24:07.505762 | orchestrator | 2025-02-19 09:24:07.505776 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-02-19 09:24:07.505788 | orchestrator | Wednesday 19 February 2025 09:21:23 +0000 (0:00:07.077) 0:02:40.629 **** 2025-02-19 09:24:07.505800 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.505813 | orchestrator | 2025-02-19 09:24:07.505825 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-02-19 09:24:07.505837 | orchestrator | Wednesday 19 February 2025 09:21:27 +0000 (0:00:04.385) 0:02:45.014 **** 2025-02-19 09:24:07.505850 | orchestrator | ok: [testbed-node-0] 2025-02-19 09:24:07.505862 | orchestrator | ok: [testbed-node-1] 2025-02-19 09:24:07.505875 | orchestrator | ok: [testbed-node-2] 2025-02-19 09:24:07.505887 | orchestrator | 2025-02-19 09:24:07.505900 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-02-19 09:24:07.505912 | orchestrator | Wednesday 19 February 2025 09:21:28 +0000 (0:00:00.521) 0:02:45.536 **** 2025-02-19 09:24:07.505927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.505979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.506002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.506054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.506071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.506084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.506098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.506269 | orchestrator | 2025-02-19 09:24:07.506281 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-02-19 09:24:07.506294 | orchestrator | Wednesday 19 February 2025 09:21:31 +0000 (0:00:03.515) 0:02:49.052 **** 2025-02-19 09:24:07.506306 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:24:07.506319 | orchestrator | 2025-02-19 09:24:07.506358 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-02-19 09:24:07.506374 | orchestrator | Wednesday 19 February 2025 09:21:31 +0000 (0:00:00.134) 0:02:49.186 **** 2025-02-19 09:24:07.506386 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:24:07.506417 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:24:07.506430 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:24:07.506442 | orchestrator | 2025-02-19 09:24:07.506455 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-02-19 09:24:07.506467 | orchestrator | Wednesday 19 February 2025 09:21:32 +0000 (0:00:00.524) 0:02:49.711 **** 2025-02-19 09:24:07.506480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.506493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.506506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.506520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.506539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.506552 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:24:07.506601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.506617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.506630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.506643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.506656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.506675 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:24:07.506688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.506726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.506742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.506755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.506768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.506781 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:24:07.506793 | orchestrator | 2025-02-19 09:24:07.506806 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-19 09:24:07.506825 | orchestrator | Wednesday 19 February 2025 09:21:33 +0000 (0:00:01.275) 0:02:50.987 **** 2025-02-19 09:24:07.506838 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-19 09:24:07.506850 | orchestrator | 2025-02-19 09:24:07.506863 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-02-19 09:24:07.506875 | orchestrator | Wednesday 19 February 2025 09:21:34 +0000 (0:00:01.113) 0:02:52.100 **** 
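[Editor's note, not part of the job output] The octavia container definitions above carry healthcheck entries such as "healthcheck_curl http://192.168.16.10:9876" and "healthcheck_port octavia-worker 5672". These refer to helper scripts shipped inside the Kolla images; the snippet below is only a rough stand-in for the HTTP variant, under the assumption that "healthy" simply means the octavia-api endpoint answers a request within the configured timeout.

    # Rough stand-in (assumption, not the kolla healthcheck script itself) for
    # "healthcheck_curl http://<node-ip>:9876": exit 0 if the octavia-api
    # endpoint returns a non-error HTTP response within the timeout, else exit 1.
    import sys
    import urllib.request

    def probe(url: str, timeout: float = 30.0) -> int:
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return 0  # healthy: endpoint answered without an HTTP error
        except Exception:
            return 1      # unhealthy: refused, timed out, or HTTP error status

    if __name__ == "__main__":
        sys.exit(probe("http://192.168.16.10:9876"))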
2025-02-19 09:24:07.506888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.506928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.506943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.506957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.506970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.506995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.507008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507090 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507162 | orchestrator | 2025-02-19 09:24:07.507174 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-02-19 09:24:07.507187 | orchestrator | Wednesday 19 February 2025 09:21:39 +0000 (0:00:05.228) 0:02:57.329 **** 2025-02-19 09:24:07.507207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.507220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.507239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.507277 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:24:07.507298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.507311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.507324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.507369 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:24:07.507382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.507395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.507434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.507480 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:24:07.507492 | orchestrator | 2025-02-19 09:24:07.507505 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-02-19 09:24:07.507517 | orchestrator | Wednesday 19 February 2025 09:21:40 +0000 (0:00:01.026) 0:02:58.355 **** 2025-02-19 09:24:07.507530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.507543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.507555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.507606 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:24:07.507620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.507632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.507645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.507689 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:24:07.507702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-19 09:24:07.507721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-19 09:24:07.507734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-19 09:24:07.507759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-19 09:24:07.507772 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:24:07.507785 | orchestrator | 2025-02-19 09:24:07.507797 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-02-19 09:24:07.507810 | orchestrator | Wednesday 19 February 2025 09:21:42 +0000 (0:00:01.519) 0:02:59.874 **** 2025-02-19 09:24:07.507830 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.507849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.507863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.507876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.507889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.507901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.507919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507977 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.507990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508054 | orchestrator | 2025-02-19 09:24:07.508066 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-02-19 09:24:07.508079 | orchestrator | Wednesday 19 February 2025 09:21:47 +0000 (0:00:05.623) 0:03:05.498 **** 2025-02-19 09:24:07.508100 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-19 09:24:07.508118 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-19 09:24:07.508131 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-02-19 09:24:07.508143 | orchestrator | 2025-02-19 09:24:07.508156 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-02-19 09:24:07.508168 | orchestrator | Wednesday 19 February 2025 09:21:50 +0000 (0:00:02.591) 0:03:08.089 **** 2025-02-19 09:24:07.508181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.508194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.508220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.508233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.508246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.508259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.508273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.508388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 
'timeout': '30'}}}) 2025-02-19 09:24:07.508430 | orchestrator | 2025-02-19 09:24:07.508443 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-02-19 09:24:07.508456 | orchestrator | Wednesday 19 February 2025 09:22:13 +0000 (0:00:23.153) 0:03:31.243 **** 2025-02-19 09:24:07.508468 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.508481 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.508493 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.508506 | orchestrator | 2025-02-19 09:24:07.508518 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-02-19 09:24:07.508530 | orchestrator | Wednesday 19 February 2025 09:22:15 +0000 (0:00:02.067) 0:03:33.310 **** 2025-02-19 09:24:07.508543 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508555 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508567 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508579 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.508592 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.508604 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.508616 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.508629 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.508646 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.508659 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-19 09:24:07.508671 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-19 09:24:07.508684 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-19 09:24:07.508696 | orchestrator | 2025-02-19 09:24:07.508708 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-02-19 09:24:07.508720 | orchestrator | Wednesday 19 February 2025 09:22:26 +0000 (0:00:10.668) 0:03:43.979 **** 2025-02-19 09:24:07.508732 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508744 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508757 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508769 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.508781 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.508794 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.508806 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.508818 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.508830 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.508842 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-19 09:24:07.508854 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-19 09:24:07.508866 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-19 09:24:07.508878 | orchestrator | 2025-02-19 
09:24:07.508891 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-02-19 09:24:07.508903 | orchestrator | Wednesday 19 February 2025 09:22:34 +0000 (0:00:07.839) 0:03:51.818 **** 2025-02-19 09:24:07.508921 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508934 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508946 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-02-19 09:24:07.508959 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.508971 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.508983 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-02-19 09:24:07.509002 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.509015 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.509027 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-02-19 09:24:07.509040 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-02-19 09:24:07.509052 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-02-19 09:24:07.509064 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-02-19 09:24:07.509077 | orchestrator | 2025-02-19 09:24:07.509089 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-02-19 09:24:07.509101 | orchestrator | Wednesday 19 February 2025 09:22:42 +0000 (0:00:07.846) 0:03:59.664 **** 2025-02-19 09:24:07.509114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.509133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.509147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-19 09:24:07.509160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.509182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.509195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-02-19 09:24:07.509208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-02-19 09:24:07.509335 | orchestrator | 2025-02-19 09:24:07.509353 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-19 09:24:07.509370 | orchestrator | Wednesday 19 February 2025 09:22:48 +0000 (0:00:06.260) 0:04:05.925 **** 2025-02-19 09:24:07.509382 | orchestrator | skipping: [testbed-node-0] 2025-02-19 09:24:07.509439 | orchestrator | skipping: [testbed-node-1] 2025-02-19 09:24:07.509456 | orchestrator | skipping: [testbed-node-2] 2025-02-19 09:24:07.509468 | orchestrator | 2025-02-19 09:24:07.509486 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-02-19 09:24:07.509499 | orchestrator | Wednesday 19 February 2025 09:22:48 +0000 (0:00:00.530) 0:04:06.456 **** 2025-02-19 09:24:07.509512 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.509524 | orchestrator | 2025-02-19 09:24:07.509538 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-02-19 09:24:07.509554 | orchestrator | Wednesday 19 February 2025 09:22:51 +0000 (0:00:02.527) 0:04:08.984 **** 2025-02-19 09:24:07.509565 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.509575 | orchestrator | 2025-02-19 09:24:07.509585 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-02-19 09:24:07.509595 | orchestrator | Wednesday 19 February 2025 09:22:54 +0000 (0:00:02.766) 0:04:11.751 **** 2025-02-19 09:24:07.509605 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.509621 | orchestrator | 2025-02-19 09:24:07.509632 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-02-19 09:24:07.509647 | orchestrator | Wednesday 19 February 2025 09:22:57 +0000 (0:00:02.795) 0:04:14.546 **** 2025-02-19 09:24:07.509658 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.509668 | orchestrator | 2025-02-19 09:24:07.509678 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-02-19 09:24:07.509688 | orchestrator | Wednesday 19 February 2025 09:22:59 +0000 (0:00:02.827) 0:04:17.373 **** 2025-02-19 09:24:07.509698 | orchestrator | changed: 
[testbed-node-0] 2025-02-19 09:24:07.509708 | orchestrator | 2025-02-19 09:24:07.509722 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-19 09:24:07.509733 | orchestrator | Wednesday 19 February 2025 09:23:20 +0000 (0:00:20.940) 0:04:38.314 **** 2025-02-19 09:24:07.509742 | orchestrator | 2025-02-19 09:24:07.509752 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-19 09:24:07.509762 | orchestrator | Wednesday 19 February 2025 09:23:20 +0000 (0:00:00.066) 0:04:38.380 **** 2025-02-19 09:24:07.509773 | orchestrator | 2025-02-19 09:24:07.509783 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-02-19 09:24:07.509793 | orchestrator | Wednesday 19 February 2025 09:23:20 +0000 (0:00:00.067) 0:04:38.447 **** 2025-02-19 09:24:07.509803 | orchestrator | 2025-02-19 09:24:07.509813 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-02-19 09:24:07.509823 | orchestrator | Wednesday 19 February 2025 09:23:21 +0000 (0:00:00.082) 0:04:38.530 **** 2025-02-19 09:24:07.509833 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.509843 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.509853 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.509863 | orchestrator | 2025-02-19 09:24:07.509873 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-02-19 09:24:07.509882 | orchestrator | Wednesday 19 February 2025 09:23:36 +0000 (0:00:15.040) 0:04:53.571 **** 2025-02-19 09:24:07.509892 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.509902 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.509912 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.509922 | orchestrator | 2025-02-19 09:24:07.509932 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-02-19 09:24:07.509942 | orchestrator | Wednesday 19 February 2025 09:23:44 +0000 (0:00:08.423) 0:05:01.994 **** 2025-02-19 09:24:07.509952 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.509962 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.509972 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.509982 | orchestrator | 2025-02-19 09:24:07.509992 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-02-19 09:24:07.510002 | orchestrator | Wednesday 19 February 2025 09:23:51 +0000 (0:00:06.709) 0:05:08.704 **** 2025-02-19 09:24:07.510012 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.510058 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.510068 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.510079 | orchestrator | 2025-02-19 09:24:07.510089 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-02-19 09:24:07.510099 | orchestrator | Wednesday 19 February 2025 09:23:57 +0000 (0:00:06.493) 0:05:15.197 **** 2025-02-19 09:24:07.510109 | orchestrator | changed: [testbed-node-0] 2025-02-19 09:24:07.510119 | orchestrator | changed: [testbed-node-2] 2025-02-19 09:24:07.510129 | orchestrator | changed: [testbed-node-1] 2025-02-19 09:24:07.510139 | orchestrator | 2025-02-19 09:24:07.510149 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:24:07.510160 | 
orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-02-19 09:24:07.510174 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 09:24:07.510190 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-19 09:24:07.510200 | orchestrator | 2025-02-19 09:24:07.510210 | orchestrator | 2025-02-19 09:24:07.510220 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:24:07.510230 | orchestrator | Wednesday 19 February 2025 09:24:04 +0000 (0:00:07.053) 0:05:22.251 **** 2025-02-19 09:24:07.510240 | orchestrator | =============================================================================== 2025-02-19 09:24:07.510250 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 23.15s 2025-02-19 09:24:07.510266 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.94s 2025-02-19 09:24:07.510279 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.18s 2025-02-19 09:24:07.510290 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.17s 2025-02-19 09:24:07.510300 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.04s 2025-02-19 09:24:07.510315 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.88s 2025-02-19 09:24:10.547147 | orchestrator | octavia : Copying certificate files for octavia-worker ----------------- 10.67s 2025-02-19 09:24:10.547243 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.59s 2025-02-19 09:24:10.547252 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.42s 2025-02-19 09:24:10.547260 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.11s 2025-02-19 09:24:10.547266 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 7.85s 2025-02-19 09:24:10.547273 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 7.84s 2025-02-19 09:24:10.547292 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.08s 2025-02-19 09:24:10.547298 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 7.05s 2025-02-19 09:24:10.547305 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.98s 2025-02-19 09:24:10.547311 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.71s 2025-02-19 09:24:10.547317 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.66s 2025-02-19 09:24:10.547323 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 6.49s 2025-02-19 09:24:10.547329 | orchestrator | octavia : Check octavia containers -------------------------------------- 6.26s 2025-02-19 09:24:10.547335 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.73s 2025-02-19 09:24:10.547342 | orchestrator | 2025-02-19 09:24:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:10.547360 | orchestrator | 2025-02-19 09:24:10 | INFO  | Wait 1 second(s) until refresh of running tasks 
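The octavia container definitions checked above rely on kolla's healthcheck helpers: `healthcheck_curl http://192.168.16.x:9876` for octavia-api and `healthcheck_port <service> <port>` for the worker, housekeeping and health-manager containers. A minimal Python sketch of what such probes roughly amount to (the address and ports are the ones visible in the log; the real containers use kolla's shell helpers, not this code):

```python
# Minimal sketch of what the kolla healthcheck probes roughly amount to.
# The real containers use kolla's healthcheck_curl/healthcheck_port shell
# helpers; the host and ports below are the ones visible in the log above.
import socket
import urllib.error
import urllib.request


def http_probe(url: str, timeout: float = 30.0) -> bool:
    """Return True if the endpoint answers at all (any HTTP status)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        # A 4xx/5xx response still proves the API process is answering.
        return True
    except OSError:
        return False


def tcp_probe(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print(http_probe("http://192.168.16.10:9876"))  # like the octavia-api healthcheck_curl
    print(tcp_probe("192.168.16.10", 5672))         # like healthcheck_port octavia-worker 5672
```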
2025-02-19 09:24:13.589057 | orchestrator | 2025-02-19 09:24:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:16.632852 | orchestrator | 2025-02-19 09:24:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:19.665488 | orchestrator | 2025-02-19 09:24:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:22.690966 | orchestrator | 2025-02-19 09:24:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:25.724524 | orchestrator | 2025-02-19 09:24:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:28.756596 | orchestrator | 2025-02-19 09:24:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:31.793748 | orchestrator | 2025-02-19 09:24:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:34.834293 | orchestrator | 2025-02-19 09:24:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:37.875336 | orchestrator | 2025-02-19 09:24:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:40.911328 | orchestrator | 2025-02-19 09:24:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:43.941968 | orchestrator | 2025-02-19 09:24:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:46.973666 | orchestrator | 2025-02-19 09:24:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:50.022661 | orchestrator | 2025-02-19 09:24:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:53.063976 | orchestrator | 2025-02-19 09:24:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:56.100835 | orchestrator | 2025-02-19 09:24:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:24:59.141365 | orchestrator | 2025-02-19 09:24:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:25:02.175599 | orchestrator | 2025-02-19 09:25:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:25:05.218935 | orchestrator | 2025-02-19 09:25:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-02-19 09:25:08.255135 | orchestrator | 2025-02-19 09:25:08.538584 | orchestrator | 2025-02-19 09:25:08.545277 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Feb 19 09:25:08 UTC 2025 2025-02-19 09:25:08.549020 | orchestrator | 2025-02-19 09:25:19.172644 | orchestrator | changed 2025-02-19 09:25:19.488146 | 2025-02-19 09:25:19.488288 | TASK [Bootstrap services] 2025-02-19 09:25:20.170933 | orchestrator | 2025-02-19 09:25:20.179267 | orchestrator | # BOOTSTRAP 2025-02-19 09:25:20.179383 | orchestrator | 2025-02-19 09:25:20.179402 | orchestrator | + set -e 2025-02-19 09:25:20.179477 | orchestrator | + echo 2025-02-19 09:25:20.179496 | orchestrator | + echo '# BOOTSTRAP' 2025-02-19 09:25:20.179512 | orchestrator | + echo 2025-02-19 09:25:20.179535 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-02-19 09:25:20.179575 | orchestrator | + set -e 2025-02-19 09:25:26.742151 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-02-19 09:25:26.742323 | orchestrator | 2025-02-19 09:25:26 | INFO  | Flavor SCS-1V-4 created 2025-02-19 09:25:26.963961 | orchestrator | 2025-02-19 09:25:26 | INFO  | Flavor SCS-2V-8 created 2025-02-19 09:25:27.171474 | orchestrator | 2025-02-19 09:25:27 | INFO  | Flavor SCS-4V-16 created 2025-02-19 09:25:27.331573 | orchestrator | 2025-02-19 09:25:27 
| INFO  | Flavor SCS-8V-32 created 2025-02-19 09:25:27.443592 | orchestrator | 2025-02-19 09:25:27 | INFO  | Flavor SCS-1V-2 created 2025-02-19 09:25:27.561482 | orchestrator | 2025-02-19 09:25:27 | INFO  | Flavor SCS-2V-4 created 2025-02-19 09:25:27.692986 | orchestrator | 2025-02-19 09:25:27 | INFO  | Flavor SCS-4V-8 created 2025-02-19 09:25:27.814317 | orchestrator | 2025-02-19 09:25:27 | INFO  | Flavor SCS-8V-16 created 2025-02-19 09:25:27.959964 | orchestrator | 2025-02-19 09:25:27 | INFO  | Flavor SCS-16V-32 created 2025-02-19 09:25:28.104500 | orchestrator | 2025-02-19 09:25:28 | INFO  | Flavor SCS-1V-8 created 2025-02-19 09:25:28.210656 | orchestrator | 2025-02-19 09:25:28 | INFO  | Flavor SCS-2V-16 created 2025-02-19 09:25:28.351849 | orchestrator | 2025-02-19 09:25:28 | INFO  | Flavor SCS-4V-32 created 2025-02-19 09:25:28.500380 | orchestrator | 2025-02-19 09:25:28 | INFO  | Flavor SCS-1L-1 created 2025-02-19 09:25:28.622104 | orchestrator | 2025-02-19 09:25:28 | INFO  | Flavor SCS-2V-4-20s created 2025-02-19 09:25:28.770302 | orchestrator | 2025-02-19 09:25:28 | INFO  | Flavor SCS-4V-16-100s created 2025-02-19 09:25:28.903175 | orchestrator | 2025-02-19 09:25:28 | INFO  | Flavor SCS-1V-4-10 created 2025-02-19 09:25:29.041289 | orchestrator | 2025-02-19 09:25:29 | INFO  | Flavor SCS-2V-8-20 created 2025-02-19 09:25:29.157371 | orchestrator | 2025-02-19 09:25:29 | INFO  | Flavor SCS-4V-16-50 created 2025-02-19 09:25:29.276897 | orchestrator | 2025-02-19 09:25:29 | INFO  | Flavor SCS-8V-32-100 created 2025-02-19 09:25:29.389002 | orchestrator | 2025-02-19 09:25:29 | INFO  | Flavor SCS-1V-2-5 created 2025-02-19 09:25:29.510652 | orchestrator | 2025-02-19 09:25:29 | INFO  | Flavor SCS-2V-4-10 created 2025-02-19 09:25:29.643764 | orchestrator | 2025-02-19 09:25:29 | INFO  | Flavor SCS-4V-8-20 created 2025-02-19 09:25:29.757940 | orchestrator | 2025-02-19 09:25:29 | INFO  | Flavor SCS-8V-16-50 created 2025-02-19 09:25:29.902143 | orchestrator | 2025-02-19 09:25:29 | INFO  | Flavor SCS-16V-32-100 created 2025-02-19 09:25:30.058095 | orchestrator | 2025-02-19 09:25:30 | INFO  | Flavor SCS-1V-8-20 created 2025-02-19 09:25:30.189677 | orchestrator | 2025-02-19 09:25:30 | INFO  | Flavor SCS-2V-16-50 created 2025-02-19 09:25:30.316532 | orchestrator | 2025-02-19 09:25:30 | INFO  | Flavor SCS-4V-32-100 created 2025-02-19 09:25:30.447051 | orchestrator | 2025-02-19 09:25:30 | INFO  | Flavor SCS-1L-1-5 created 2025-02-19 09:25:33.254157 | orchestrator | 2025-02-19 09:25:33 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-02-19 09:25:33.362593 | orchestrator | 2025-02-19 09:25:33 | INFO  | Task d09c418a-cf9b-4e59-b7ca-4db9602f451e (bootstrap-basic) was prepared for execution. 2025-02-19 09:25:37.692251 | orchestrator | 2025-02-19 09:25:33 | INFO  | It takes a moment until task d09c418a-cf9b-4e59-b7ca-4db9602f451e (bootstrap-basic) has been started and output is visible here. 
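The `300-openstack.sh` bootstrap step above registers the SCS standard flavors. A hedged openstacksdk sketch of the equivalent API calls, assuming a `clouds.yaml` entry named `admin` and the usual SCS naming scheme `SCS-<vCPUs>V-<RAM GiB>[-<root disk GiB>]`; the testbed itself drives this through its own bootstrap tooling rather than this exact snippet:

```python
# Hedged sketch: create a few of the SCS flavors listed above with openstacksdk.
# Assumes a clouds.yaml entry named "admin" and the SCS naming scheme
# SCS-<vCPUs>V-<RAM GiB>[-<root disk GiB>]; not the script the testbed runs.
import openstack

conn = openstack.connect(cloud="admin")

flavors = [
    # (name, vcpus, ram_mib, disk_gib)
    ("SCS-1V-4", 1, 4096, 0),
    ("SCS-2V-8", 2, 8192, 0),
    ("SCS-1V-4-10", 1, 4096, 10),
]

for name, vcpus, ram, disk in flavors:
    if conn.compute.find_flavor(name) is None:
        conn.compute.create_flavor(name=name, vcpus=vcpus, ram=ram, disk=disk)
        print(f"Flavor {name} created")
```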
2025-02-19 09:25:37.692500 | orchestrator | 2025-02-19 09:25:37.692823 | orchestrator | PLAY [Prepare masquerading on the manager node] ******************************** 2025-02-19 09:25:37.692932 | orchestrator | 2025-02-19 09:25:37.692969 | orchestrator | TASK [Accept FORWARD on the management interface (incoming)] ******************* 2025-02-19 09:25:37.693249 | orchestrator | Wednesday 19 February 2025 09:25:37 +0000 (0:00:00.173) 0:00:00.173 **** 2025-02-19 09:25:38.369314 | orchestrator | ok: [testbed-manager] 2025-02-19 09:25:38.888595 | orchestrator | 2025-02-19 09:25:38.888718 | orchestrator | TASK [Accept FORWARD on the management interface (outgoing)] ******************* 2025-02-19 09:25:38.888738 | orchestrator | Wednesday 19 February 2025 09:25:38 +0000 (0:00:00.677) 0:00:00.850 **** 2025-02-19 09:25:38.888765 | orchestrator | ok: [testbed-manager] 2025-02-19 09:25:38.889113 | orchestrator | 2025-02-19 09:25:38.889922 | orchestrator | TASK [Masquerade traffic on the management interface] ************************** 2025-02-19 09:25:38.890288 | orchestrator | Wednesday 19 February 2025 09:25:38 +0000 (0:00:00.520) 0:00:01.371 **** 2025-02-19 09:25:39.318326 | orchestrator | ok: [testbed-manager] 2025-02-19 09:25:39.318645 | orchestrator | 2025-02-19 09:25:39.318773 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-02-19 09:25:39.318798 | orchestrator | 2025-02-19 09:25:39.319321 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-19 09:25:39.319501 | orchestrator | Wednesday 19 February 2025 09:25:39 +0000 (0:00:00.431) 0:00:01.802 **** 2025-02-19 09:25:40.791116 | orchestrator | ok: [localhost] 2025-02-19 09:25:40.791686 | orchestrator | 2025-02-19 09:25:40.791821 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-02-19 09:25:40.792198 | orchestrator | Wednesday 19 February 2025 09:25:40 +0000 (0:00:01.470) 0:00:03.273 **** 2025-02-19 09:25:49.109225 | orchestrator | ok: [localhost] 2025-02-19 09:25:49.109516 | orchestrator | 2025-02-19 09:25:49.109558 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-02-19 09:25:49.110126 | orchestrator | Wednesday 19 February 2025 09:25:49 +0000 (0:00:08.319) 0:00:11.592 **** 2025-02-19 09:25:55.626644 | orchestrator | changed: [localhost] 2025-02-19 09:25:55.626893 | orchestrator | 2025-02-19 09:25:55.626922 | orchestrator | TASK [Get volume type local] *************************************************** 2025-02-19 09:25:55.626945 | orchestrator | Wednesday 19 February 2025 09:25:55 +0000 (0:00:06.517) 0:00:18.109 **** 2025-02-19 09:26:01.933461 | orchestrator | ok: [localhost] 2025-02-19 09:26:01.933617 | orchestrator | 2025-02-19 09:26:01.933636 | orchestrator | TASK [Create volume type local] ************************************************ 2025-02-19 09:26:01.933654 | orchestrator | Wednesday 19 February 2025 09:26:01 +0000 (0:00:06.306) 0:00:24.416 **** 2025-02-19 09:26:09.413787 | orchestrator | changed: [localhost] 2025-02-19 09:26:09.413938 | orchestrator | 2025-02-19 09:26:09.413957 | orchestrator | TASK [Create public network] *************************************************** 2025-02-19 09:26:14.669575 | orchestrator | Wednesday 19 February 2025 09:26:09 +0000 (0:00:07.479) 0:00:31.895 **** 2025-02-19 09:26:14.669714 | orchestrator | changed: [localhost] 2025-02-19 09:26:14.670622 | orchestrator | 2025-02-19 
09:26:14.670663 | orchestrator | TASK [Set public network to default] ******************************************* 2025-02-19 09:26:14.672214 | orchestrator | Wednesday 19 February 2025 09:26:14 +0000 (0:00:05.256) 0:00:37.152 **** 2025-02-19 09:26:20.159815 | orchestrator | changed: [localhost] 2025-02-19 09:26:20.159981 | orchestrator | 2025-02-19 09:26:20.159999 | orchestrator | TASK [Create public subnet] **************************************************** 2025-02-19 09:26:20.160012 | orchestrator | Wednesday 19 February 2025 09:26:20 +0000 (0:00:05.487) 0:00:42.639 **** 2025-02-19 09:26:24.732953 | orchestrator | changed: [localhost] 2025-02-19 09:26:24.733358 | orchestrator | 2025-02-19 09:26:24.733510 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-02-19 09:26:24.733556 | orchestrator | Wednesday 19 February 2025 09:26:24 +0000 (0:00:04.575) 0:00:47.215 **** 2025-02-19 09:26:28.584773 | orchestrator | changed: [localhost] 2025-02-19 09:26:32.209178 | orchestrator | 2025-02-19 09:26:32.209334 | orchestrator | TASK [Create manager role] ***************************************************** 2025-02-19 09:26:32.209356 | orchestrator | Wednesday 19 February 2025 09:26:28 +0000 (0:00:03.850) 0:00:51.065 **** 2025-02-19 09:26:32.209387 | orchestrator | ok: [localhost] 2025-02-19 09:26:32.209670 | orchestrator | 2025-02-19 09:26:32.209967 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-19 09:26:32.209999 | orchestrator | 2025-02-19 09:26:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-19 09:26:32.210244 | orchestrator | 2025-02-19 09:26:32 | INFO  | Please wait and do not abort execution. 
2025-02-19 09:26:32.212855 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:26:32.213602 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-19 09:26:32.214412 | orchestrator | 2025-02-19 09:26:32.214742 | orchestrator | 2025-02-19 09:26:32.215375 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-19 09:26:32.215922 | orchestrator | Wednesday 19 February 2025 09:26:32 +0000 (0:00:03.625) 0:00:54.691 **** 2025-02-19 09:26:32.216815 | orchestrator | =============================================================================== 2025-02-19 09:26:32.217409 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.32s 2025-02-19 09:26:32.218271 | orchestrator | Create volume type local ------------------------------------------------ 7.48s 2025-02-19 09:26:32.218853 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.52s 2025-02-19 09:26:32.219376 | orchestrator | Get volume type local --------------------------------------------------- 6.31s 2025-02-19 09:26:32.219960 | orchestrator | Set public network to default ------------------------------------------- 5.49s 2025-02-19 09:26:32.220544 | orchestrator | Create public network --------------------------------------------------- 5.26s 2025-02-19 09:26:32.220944 | orchestrator | Create public subnet ---------------------------------------------------- 4.58s 2025-02-19 09:26:32.221566 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.85s 2025-02-19 09:26:32.222406 | orchestrator | Create manager role ----------------------------------------------------- 3.63s 2025-02-19 09:26:32.223086 | orchestrator | Gathering Facts --------------------------------------------------------- 1.47s 2025-02-19 09:26:32.223795 | orchestrator | Accept FORWARD on the management interface (incoming) ------------------- 0.68s 2025-02-19 09:26:32.224082 | orchestrator | Accept FORWARD on the management interface (outgoing) ------------------- 0.52s 2025-02-19 09:26:32.224762 | orchestrator | Masquerade traffic on the management interface -------------------------- 0.43s 2025-02-19 09:26:37.193198 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url. 2025-02-19 09:26:37.198523 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url. 
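The two discovery warnings above foreshadow the traceback that follows: openstacksdk performs version discovery against the Glance root URL, and when the request to https://api.testbed.osism.xyz:9292 fails it is left with no supported versions and raises `NotSupported`. A hedged diagnostic sketch that fetches the same version document the SDK looks for (assumes the `requests` library is available):

```python
# Hedged diagnostic sketch for the "NotSupported: The image service ... does not
# have any supported versions" error below. openstacksdk discovers API versions
# from the Glance root URL; if that request fails, no usable version is found.
# The endpoint URL is the one reported in the log.
import requests

GLANCE_ENDPOINT = "https://api.testbed.osism.xyz:9292/"

try:
    # Glance answers its root URL with a version document (often HTTP 300).
    resp = requests.get(GLANCE_ENDPOINT, timeout=10)
    for version in resp.json().get("versions", []):
        print(version.get("id"), version.get("status"))
except requests.RequestException as exc:
    # Roughly the situation in the log: discovery fails, the SDK falls back to
    # the bare endpoint and then finds no supported versions.
    print(f"Version discovery failed: {exc}")
```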
2025-02-19 09:26:37.801057 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-02-19 09:26:37.801876 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:126 │ 2025-02-19 09:26:37.801910 | orchestrator | │ in create_cli_args │ 2025-02-19 09:26:37.801924 | orchestrator | │ │ 2025-02-19 09:26:37.801938 | orchestrator | │ 123 │ │ logger.add(sys.stderr, format=log_fmt, level=level, colorize= │ 2025-02-19 09:26:37.801961 | orchestrator | │ 124 │ │ │ 2025-02-19 09:26:37.801975 | orchestrator | │ 125 │ │ if __name__ == "__main__" or __name__ == "openstack_image_man │ 2025-02-19 09:26:37.802005 | orchestrator | │ ❱ 126 │ │ │ self.main() │ 2025-02-19 09:26:37.802088 | orchestrator | │ 127 │ │ 2025-02-19 09:26:37.802105 | orchestrator | │ 128 │ def read_image_files(self, return_all_images=False) -> list: │ 2025-02-19 09:26:37.802117 | orchestrator | │ 129 │ │ """Read all YAML files in self.CONF.images""" │ 2025-02-19 09:26:37.802130 | orchestrator | │ │ 2025-02-19 09:26:37.802143 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:37.802173 | orchestrator | │ │ check = True │ │ 2025-02-19 09:26:37.802187 | orchestrator | │ │ check_age = False │ │ 2025-02-19 09:26:37.802199 | orchestrator | │ │ check_only = False │ │ 2025-02-19 09:26:37.802212 | orchestrator | │ │ cloud = 'admin' │ │ 2025-02-19 09:26:37.802224 | orchestrator | │ │ deactivate = False │ │ 2025-02-19 09:26:37.802237 | orchestrator | │ │ debug = False │ │ 2025-02-19 09:26:37.802249 | orchestrator | │ │ delete = False │ │ 2025-02-19 09:26:37.802262 | orchestrator | │ │ dry_run = False │ │ 2025-02-19 09:26:37.802274 | orchestrator | │ │ filter = 'Cirros' │ │ 2025-02-19 09:26:37.802289 | orchestrator | │ │ force = False │ │ 2025-02-19 09:26:37.802312 | orchestrator | │ │ hide = True │ │ 2025-02-19 09:26:37.802334 | orchestrator | │ │ images = '/etc/images' │ │ 2025-02-19 09:26:37.802355 | orchestrator | │ │ keep = False │ │ 2025-02-19 09:26:37.802375 | orchestrator | │ │ latest = False │ │ 2025-02-19 09:26:37.802456 | orchestrator | │ │ level = 'INFO' │ │ 2025-02-19 09:26:37.802478 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} │ │ 2025-02-19 09:26:37.802499 | orchestrator | │ │ | {level: <8} | '+17 │ │ 2025-02-19 09:26:37.802520 | orchestrator | │ │ max_age = 90 │ │ 2025-02-19 09:26:37.802541 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:37.802592 | orchestrator | │ │ share_action = 'add' │ │ 2025-02-19 09:26:37.802617 | orchestrator | │ │ share_domain = 'default' │ │ 2025-02-19 09:26:37.802641 | orchestrator | │ │ share_image = None │ │ 2025-02-19 09:26:37.802665 | orchestrator | │ │ share_target = None │ │ 2025-02-19 09:26:37.802691 | orchestrator | │ │ share_type = 'project' │ │ 2025-02-19 09:26:37.802716 | orchestrator | │ │ tag = 'managed_by_osism' │ │ 2025-02-19 09:26:37.802741 | orchestrator | │ │ use_os_hidden = False │ │ 2025-02-19 09:26:37.802768 | orchestrator | │ │ yes_i_really_know_what_i_do = False │ │ 2025-02-19 09:26:37.802806 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:37.802848 | orchestrator | │ │ 2025-02-19 09:26:37.802864 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:253 │ 2025-02-19 09:26:37.802879 | orchestrator | │ in main │ 2025-02-19 09:26:37.802893 | orchestrator | │ │ 2025-02-19 09:26:37.802989 | orchestrator | │ 250 │ │ else: │ 
2025-02-19 09:26:37.803010 | orchestrator | │ 251 │ │ │ self.create_connection() │ 2025-02-19 09:26:37.803025 | orchestrator | │ 252 │ │ │ images = self.read_image_files() │ 2025-02-19 09:26:37.803040 | orchestrator | │ ❱ 253 │ │ │ managed_images = self.process_images(images) │ 2025-02-19 09:26:37.803055 | orchestrator | │ 254 │ │ │ │ 2025-02-19 09:26:37.803070 | orchestrator | │ 255 │ │ │ # ignore all non-specified images when using --filter │ 2025-02-19 09:26:37.803086 | orchestrator | │ 256 │ │ │ if self.CONF.filter: │ 2025-02-19 09:26:37.803101 | orchestrator | │ │ 2025-02-19 09:26:37.803120 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:37.803136 | orchestrator | │ │ images = [ │ │ 2025-02-19 09:26:37.803151 | orchestrator | │ │ │ { │ │ 2025-02-19 09:26:37.803166 | orchestrator | │ │ │ │ 'name': 'Cirros', │ │ 2025-02-19 09:26:37.803185 | orchestrator | │ │ │ │ 'enable': True, │ │ 2025-02-19 09:26:37.803253 | orchestrator | │ │ │ │ 'format': 'qcow2', │ │ 2025-02-19 09:26:37.803278 | orchestrator | │ │ │ │ 'login': 'cirros', │ │ 2025-02-19 09:26:37.803302 | orchestrator | │ │ │ │ 'password': 'gocubsgo', │ │ 2025-02-19 09:26:37.803326 | orchestrator | │ │ │ │ 'min_disk': 1, │ │ 2025-02-19 09:26:37.803350 | orchestrator | │ │ │ │ 'min_ram': 32, │ │ 2025-02-19 09:26:37.803374 | orchestrator | │ │ │ │ 'status': 'active', │ │ 2025-02-19 09:26:37.803392 | orchestrator | │ │ │ │ 'visibility': 'public', │ │ 2025-02-19 09:26:37.803406 | orchestrator | │ │ │ │ 'multi': False, │ │ 2025-02-19 09:26:37.803450 | orchestrator | │ │ │ │ ... +3 │ │ 2025-02-19 09:26:37.803468 | orchestrator | │ │ │ } │ │ 2025-02-19 09:26:37.803482 | orchestrator | │ │ ] │ │ 2025-02-19 09:26:37.803497 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:37.803525 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:37.803551 | orchestrator | │ │ 2025-02-19 09:26:37.803565 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:370 │ 2025-02-19 09:26:37.803579 | orchestrator | │ in process_images │ 2025-02-19 09:26:37.803593 | orchestrator | │ │ 2025-02-19 09:26:37.803608 | orchestrator | │ 367 │ │ │ if "image_name" not in image["meta"]: │ 2025-02-19 09:26:37.803622 | orchestrator | │ 368 │ │ │ │ image["meta"]["image_name"] = image["name"] │ 2025-02-19 09:26:37.803636 | orchestrator | │ 369 │ │ │ │ 2025-02-19 09:26:37.803655 | orchestrator | │ ❱ 370 │ │ │ existing_images, imported_image, previous_image = self.pr │ 2025-02-19 09:26:37.803669 | orchestrator | │ 371 │ │ │ │ image, versions, sorted_versions, image["meta"].copy( │ 2025-02-19 09:26:37.803683 | orchestrator | │ 372 │ │ │ ) │ 2025-02-19 09:26:37.803698 | orchestrator | │ 373 │ │ │ managed_images = managed_images.union(existing_images) │ 2025-02-19 09:26:37.803723 | orchestrator | │ │ 2025-02-19 09:26:37.803739 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:37.803753 | orchestrator | │ │ image = { │ │ 2025-02-19 09:26:37.803767 | orchestrator | │ │ │ 'name': 'Cirros', │ │ 2025-02-19 09:26:37.803781 | orchestrator | │ │ │ 'enable': True, │ │ 2025-02-19 09:26:37.803795 | orchestrator | │ │ │ 'format': 'qcow2', │ │ 2025-02-19 09:26:37.803810 | orchestrator | │ │ │ 'login': 'cirros', │ │ 2025-02-19 09:26:37.803824 | orchestrator | │ │ │ 'password': 'gocubsgo', │ │ 2025-02-19 09:26:37.803837 | orchestrator | │ │ │ 'min_disk': 1, │ │ 2025-02-19 
09:26:37.803851 | orchestrator | │ │ │ 'min_ram': 32, │ │ 2025-02-19 09:26:37.803865 | orchestrator | │ │ │ 'status': 'active', │ │ 2025-02-19 09:26:37.803879 | orchestrator | │ │ │ 'visibility': 'public', │ │ 2025-02-19 09:26:37.803893 | orchestrator | │ │ │ 'multi': False, │ │ 2025-02-19 09:26:37.803907 | orchestrator | │ │ │ ... +3 │ │ 2025-02-19 09:26:37.803921 | orchestrator | │ │ } │ │ 2025-02-19 09:26:37.803935 | orchestrator | │ │ images = [ │ │ 2025-02-19 09:26:37.803949 | orchestrator | │ │ │ { │ │ 2025-02-19 09:26:37.803963 | orchestrator | │ │ │ │ 'name': 'Cirros', │ │ 2025-02-19 09:26:37.803977 | orchestrator | │ │ │ │ 'enable': True, │ │ 2025-02-19 09:26:37.804053 | orchestrator | │ │ │ │ 'format': 'qcow2', │ │ 2025-02-19 09:26:37.804071 | orchestrator | │ │ │ │ 'login': 'cirros', │ │ 2025-02-19 09:26:37.804094 | orchestrator | │ │ │ │ 'password': 'gocubsgo', │ │ 2025-02-19 09:26:37.804109 | orchestrator | │ │ │ │ 'min_disk': 1, │ │ 2025-02-19 09:26:37.804123 | orchestrator | │ │ │ │ 'min_ram': 32, │ │ 2025-02-19 09:26:37.804137 | orchestrator | │ │ │ │ 'status': 'active', │ │ 2025-02-19 09:26:37.804152 | orchestrator | │ │ │ │ 'visibility': 'public', │ │ 2025-02-19 09:26:37.804166 | orchestrator | │ │ │ │ 'multi': False, │ │ 2025-02-19 09:26:37.804180 | orchestrator | │ │ │ │ ... +3 │ │ 2025-02-19 09:26:37.804197 | orchestrator | │ │ │ } │ │ 2025-02-19 09:26:37.804212 | orchestrator | │ │ ] │ │ 2025-02-19 09:26:37.804226 | orchestrator | │ │ managed_images = set() │ │ 2025-02-19 09:26:37.804240 | orchestrator | │ │ required_key = 'visibility' │ │ 2025-02-19 09:26:37.804254 | orchestrator | │ │ REQUIRED_KEYS = [ │ │ 2025-02-19 09:26:37.804268 | orchestrator | │ │ │ 'format', │ │ 2025-02-19 09:26:37.804282 | orchestrator | │ │ │ 'name', │ │ 2025-02-19 09:26:37.804296 | orchestrator | │ │ │ 'login', │ │ 2025-02-19 09:26:37.804310 | orchestrator | │ │ │ 'status', │ │ 2025-02-19 09:26:37.804324 | orchestrator | │ │ │ 'versions', │ │ 2025-02-19 09:26:37.804339 | orchestrator | │ │ │ 'visibility' │ │ 2025-02-19 09:26:37.804352 | orchestrator | │ │ ] │ │ 2025-02-19 09:26:37.804367 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:37.804395 | orchestrator | │ │ sorted_versions = ['0.6.2', '0.6.3'] │ │ 2025-02-19 09:26:37.804409 | orchestrator | │ │ url = 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.804454 | orchestrator | │ │ version = { │ │ 2025-02-19 09:26:37.804470 | orchestrator | │ │ │ 'version': '0.6.3', │ │ 2025-02-19 09:26:37.804484 | orchestrator | │ │ │ 'url': │ │ 2025-02-19 09:26:37.804499 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.804513 | orchestrator | │ │ │ 'checksum': │ │ 2025-02-19 09:26:37.804527 | orchestrator | │ │ 'sha256:7d6355852aeb6dbcd191bcda7cd74f1536cfe5cbf8a10… │ │ 2025-02-19 09:26:37.804541 | orchestrator | │ │ │ 'build_date': datetime.date(2024, 9, 26) │ │ 2025-02-19 09:26:37.804555 | orchestrator | │ │ } │ │ 2025-02-19 09:26:37.804570 | orchestrator | │ │ versions = { │ │ 2025-02-19 09:26:37.804584 | orchestrator | │ │ │ '0.6.2': { │ │ 2025-02-19 09:26:37.804598 | orchestrator | │ │ │ │ 'url': │ │ 2025-02-19 09:26:37.804619 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.804633 | orchestrator | │ │ │ │ 'meta': { │ │ 2025-02-19 09:26:37.804647 | orchestrator | │ │ │ │ │ 'image_source': │ │ 2025-02-19 09:26:37.804661 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 
09:26:37.804675 | orchestrator | │ │ │ │ │ 'image_build_date': '2023-05-30' │ │ 2025-02-19 09:26:37.804693 | orchestrator | │ │ │ │ } │ │ 2025-02-19 09:26:37.804707 | orchestrator | │ │ │ }, │ │ 2025-02-19 09:26:37.804722 | orchestrator | │ │ │ '0.6.3': { │ │ 2025-02-19 09:26:37.804735 | orchestrator | │ │ │ │ 'url': │ │ 2025-02-19 09:26:37.804750 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.804763 | orchestrator | │ │ │ │ 'meta': { │ │ 2025-02-19 09:26:37.804778 | orchestrator | │ │ │ │ │ 'image_source': │ │ 2025-02-19 09:26:37.804792 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.804806 | orchestrator | │ │ │ │ │ 'image_build_date': '2024-09-26' │ │ 2025-02-19 09:26:37.804820 | orchestrator | │ │ │ │ } │ │ 2025-02-19 09:26:37.804834 | orchestrator | │ │ │ } │ │ 2025-02-19 09:26:37.804848 | orchestrator | │ │ } │ │ 2025-02-19 09:26:37.804863 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:37.804878 | orchestrator | │ │ 2025-02-19 09:26:37.804892 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:519 │ 2025-02-19 09:26:37.804906 | orchestrator | │ in process_image │ 2025-02-19 09:26:37.804920 | orchestrator | │ │ 2025-02-19 09:26:37.804935 | orchestrator | │ 516 │ │ Returns: │ 2025-02-19 09:26:37.804949 | orchestrator | │ 517 │ │ │ Tuple with (existing_images, imported_image, previous_ima │ 2025-02-19 09:26:37.804963 | orchestrator | │ 518 │ │ """ │ 2025-02-19 09:26:37.804977 | orchestrator | │ ❱ 519 │ │ cloud_images = self.get_images() │ 2025-02-19 09:26:37.804991 | orchestrator | │ 520 │ │ │ 2025-02-19 09:26:37.805005 | orchestrator | │ 521 │ │ existing_images: Set[str] = set() │ 2025-02-19 09:26:37.805019 | orchestrator | │ 522 │ │ imported_image = None │ 2025-02-19 09:26:37.805034 | orchestrator | │ │ 2025-02-19 09:26:37.805048 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:37.805070 | orchestrator | │ │ image = { │ │ 2025-02-19 09:26:37.805091 | orchestrator | │ │ │ 'name': 'Cirros', │ │ 2025-02-19 09:26:37.805105 | orchestrator | │ │ │ 'enable': True, │ │ 2025-02-19 09:26:37.805119 | orchestrator | │ │ │ 'format': 'qcow2', │ │ 2025-02-19 09:26:37.805133 | orchestrator | │ │ │ 'login': 'cirros', │ │ 2025-02-19 09:26:37.805147 | orchestrator | │ │ │ 'password': 'gocubsgo', │ │ 2025-02-19 09:26:37.805161 | orchestrator | │ │ │ 'min_disk': 1, │ │ 2025-02-19 09:26:37.805175 | orchestrator | │ │ │ 'min_ram': 32, │ │ 2025-02-19 09:26:37.805193 | orchestrator | │ │ │ 'status': 'active', │ │ 2025-02-19 09:26:37.805207 | orchestrator | │ │ │ 'visibility': 'public', │ │ 2025-02-19 09:26:37.805221 | orchestrator | │ │ │ 'multi': False, │ │ 2025-02-19 09:26:37.805235 | orchestrator | │ │ │ ... 
+3 │ │ 2025-02-19 09:26:37.805249 | orchestrator | │ │ } │ │ 2025-02-19 09:26:37.805263 | orchestrator | │ │ meta = { │ │ 2025-02-19 09:26:37.805277 | orchestrator | │ │ │ 'architecture': 'x86_64', │ │ 2025-02-19 09:26:37.805292 | orchestrator | │ │ │ 'hw_disk_bus': 'scsi', │ │ 2025-02-19 09:26:37.805306 | orchestrator | │ │ │ 'hw_rng_model': 'virtio', │ │ 2025-02-19 09:26:37.805320 | orchestrator | │ │ │ 'hw_scsi_model': 'virtio-scsi', │ │ 2025-02-19 09:26:37.805334 | orchestrator | │ │ │ 'hw_watchdog_action': 'reset', │ │ 2025-02-19 09:26:37.805348 | orchestrator | │ │ │ 'hypervisor_type': 'qemu', │ │ 2025-02-19 09:26:37.805362 | orchestrator | │ │ │ 'os_distro': 'cirros', │ │ 2025-02-19 09:26:37.805376 | orchestrator | │ │ │ 'replace_frequency': 'never', │ │ 2025-02-19 09:26:37.805390 | orchestrator | │ │ │ 'uuid_validity': 'none', │ │ 2025-02-19 09:26:37.805404 | orchestrator | │ │ │ 'provided_until': 'none', │ │ 2025-02-19 09:26:37.805433 | orchestrator | │ │ │ ... +2 │ │ 2025-02-19 09:26:37.805448 | orchestrator | │ │ } │ │ 2025-02-19 09:26:37.805463 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:37.805491 | orchestrator | │ │ sorted_versions = ['0.6.2', '0.6.3'] │ │ 2025-02-19 09:26:37.805505 | orchestrator | │ │ versions = { │ │ 2025-02-19 09:26:37.805519 | orchestrator | │ │ │ '0.6.2': { │ │ 2025-02-19 09:26:37.805533 | orchestrator | │ │ │ │ 'url': │ │ 2025-02-19 09:26:37.805547 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.805561 | orchestrator | │ │ │ │ 'meta': { │ │ 2025-02-19 09:26:37.805575 | orchestrator | │ │ │ │ │ 'image_source': │ │ 2025-02-19 09:26:37.805595 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.805609 | orchestrator | │ │ │ │ │ 'image_build_date': '2023-05-30' │ │ 2025-02-19 09:26:37.805624 | orchestrator | │ │ │ │ } │ │ 2025-02-19 09:26:37.805638 | orchestrator | │ │ │ }, │ │ 2025-02-19 09:26:37.805652 | orchestrator | │ │ │ '0.6.3': { │ │ 2025-02-19 09:26:37.805670 | orchestrator | │ │ │ │ 'url': │ │ 2025-02-19 09:26:37.805684 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.805698 | orchestrator | │ │ │ │ 'meta': { │ │ 2025-02-19 09:26:37.805712 | orchestrator | │ │ │ │ │ 'image_source': │ │ 2025-02-19 09:26:37.805726 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/downlo… │ │ 2025-02-19 09:26:37.805747 | orchestrator | │ │ │ │ │ 'image_build_date': '2024-09-26' │ │ 2025-02-19 09:26:37.805761 | orchestrator | │ │ │ │ } │ │ 2025-02-19 09:26:37.805776 | orchestrator | │ │ │ } │ │ 2025-02-19 09:26:37.805790 | orchestrator | │ │ } │ │ 2025-02-19 09:26:37.805804 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:37.805819 | orchestrator | │ │ 2025-02-19 09:26:37.805833 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:440 │ 2025-02-19 09:26:37.805847 | orchestrator | │ in get_images │ 2025-02-19 09:26:37.805861 | orchestrator | │ │ 2025-02-19 09:26:37.805875 | orchestrator | │ 437 │ │ """ │ 2025-02-19 09:26:37.805889 | orchestrator | │ 438 │ │ result = {} │ 2025-02-19 09:26:37.805903 | orchestrator | │ 439 │ │ │ 2025-02-19 09:26:37.805917 | orchestrator | │ ❱ 440 │ │ for image in self.conn.image.images(): │ 2025-02-19 09:26:37.805931 | orchestrator | │ 441 │ │ │ if self.CONF.tag in image.tags and ( │ 2025-02-19 09:26:37.805945 | orchestrator | │ 442 │ │ │ │ image.visibility == 
"public" │ 2025-02-19 09:26:37.805959 | orchestrator | │ 443 │ │ │ │ or image.owner == self.conn.current_project_id │ 2025-02-19 09:26:37.805973 | orchestrator | │ │ 2025-02-19 09:26:37.805987 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:37.806002 | orchestrator | │ │ result = {} │ │ 2025-02-19 09:26:37.806050 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:37.806089 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:37.806112 | orchestrator | │ │ 2025-02-19 09:26:37.806126 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack/service_description.py:89 │ 2025-02-19 09:26:37.806141 | orchestrator | │ in __get__ │ 2025-02-19 09:26:37.806155 | orchestrator | │ │ 2025-02-19 09:26:37.806169 | orchestrator | │ 86 │ │ if instance is None: │ 2025-02-19 09:26:37.806182 | orchestrator | │ 87 │ │ │ return self │ 2025-02-19 09:26:37.806197 | orchestrator | │ 88 │ │ if self.service_type not in instance._proxies: │ 2025-02-19 09:26:37.806214 | orchestrator | │ ❱ 89 │ │ │ proxy = self._make_proxy(instance) │ 2025-02-19 09:26:37.806228 | orchestrator | │ 90 │ │ │ if not isinstance(proxy, _ServiceDisabledProxyShim): │ 2025-02-19 09:26:37.806242 | orchestrator | │ 91 │ │ │ │ # The keystone proxy has a method called get_endpoint │ 2025-02-19 09:26:37.806256 | orchestrator | │ 92 │ │ │ │ # that is about managing keystone endpoints. This is │ 2025-02-19 09:26:37.806270 | orchestrator | │ │ 2025-02-19 09:26:37.806284 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:37.806299 | orchestrator | │ │ instance = │ │ 2025-02-19 09:26:37.806313 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:37.806348 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:38.275606 | orchestrator | │ │ 2025-02-19 09:26:38.275722 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack/service_description.py:291 │ 2025-02-19 09:26:38.275742 | orchestrator | │ in _make_proxy │ 2025-02-19 09:26:38.275757 | orchestrator | │ │ 2025-02-19 09:26:38.275771 | orchestrator | │ 288 │ │ if found_version is None: │ 2025-02-19 09:26:38.275785 | orchestrator | │ 289 │ │ │ region_name = instance.config.get_region_name(self.service │ 2025-02-19 09:26:38.275799 | orchestrator | │ 290 │ │ │ if version_kwargs: │ 2025-02-19 09:26:38.275813 | orchestrator | │ ❱ 291 │ │ │ │ raise exceptions.NotSupported( │ 2025-02-19 09:26:38.275827 | orchestrator | │ 292 │ │ │ │ │ f"The {self.service_type} service for " │ 2025-02-19 09:26:38.275841 | orchestrator | │ 293 │ │ │ │ │ f"{instance.name}:{region_name} exists but does no │ 2025-02-19 09:26:38.275854 | orchestrator | │ 294 │ │ │ │ │ f"any supported versions." 
│ 2025-02-19 09:26:38.275868 | orchestrator | │ │ 2025-02-19 09:26:38.275883 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:38.275924 | orchestrator | │ │ config = │ │ 2025-02-19 09:26:38.275953 | orchestrator | │ │ endpoint_override = None │ │ 2025-02-19 09:26:38.275967 | orchestrator | │ │ found_version = None │ │ 2025-02-19 09:26:38.275981 | orchestrator | │ │ instance = │ │ 2025-02-19 09:26:38.276010 | orchestrator | │ │ proxy_obj = None │ │ 2025-02-19 09:26:38.276024 | orchestrator | │ │ region_name = '' │ │ 2025-02-19 09:26:38.276038 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:38.276069 | orchestrator | │ │ supported_versions = [1, 2] │ │ 2025-02-19 09:26:38.276083 | orchestrator | │ │ temp_adapter = │ │ 2025-02-19 09:26:38.276108 | orchestrator | │ │ version_kwargs = {'min_version': '1', 'max_version': '2.latest'} │ │ 2025-02-19 09:26:38.276123 | orchestrator | │ │ version_string = None │ │ 2025-02-19 09:26:38.276141 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:38.276161 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2025-02-19 09:26:38.276177 | orchestrator | NotSupported: The image service for admin: exists but does not have any 2025-02-19 09:26:38.276194 | orchestrator | supported versions. 2025-02-19 09:26:38.276228 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-02-19 09:26:40.142862 | orchestrator | 2025-02-19 09:26:40 | INFO  | date: 2025-02-19 2025-02-19 09:26:40.181676 | orchestrator | 2025-02-19 09:26:40 | INFO  | image: octavia-amphora-haproxy-2024.1.20250219.qcow2 2025-02-19 09:26:40.181802 | orchestrator | 2025-02-19 09:26:40 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250219.qcow2 2025-02-19 09:26:40.181825 | orchestrator | 2025-02-19 09:26:40 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250219.qcow2.CHECKSUM 2025-02-19 09:26:40.181859 | orchestrator | 2025-02-19 09:26:40 | INFO  | checksum: 70f236f50fe253c47351838b31ae5002c52071dc75b931a33b3fc6e8c8b9e64e 2025-02-19 09:26:41.774147 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url. 2025-02-19 09:26:41.776994 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url. 
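The amphora image step logs the image URL together with its published sha256 checksum before hitting the same discovery failure. A hedged sketch that verifies a locally downloaded copy against that checksum (the expected value is taken from the log; the local file name is an assumption for illustration):

```python
# Hedged sketch: verify a downloaded amphora image against the sha256 checksum
# reported in the log above. The expected value comes from the log; the local
# file name is an assumption for illustration.
import hashlib

EXPECTED_SHA256 = "70f236f50fe253c47351838b31ae5002c52071dc75b931a33b3fc6e8c8b9e64e"
IMAGE_FILE = "octavia-amphora-haproxy-2024.1.20250219.qcow2"


def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Stream the file in chunks and return its hex sha256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


actual = sha256_of(IMAGE_FILE)
print("OK" if actual == EXPECTED_SHA256 else f"Mismatch: {actual}")
```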
2025-02-19 09:26:42.389815 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-02-19 09:26:42.389920 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:126 │ 2025-02-19 09:26:42.389939 | orchestrator | │ in create_cli_args │ 2025-02-19 09:26:42.389974 | orchestrator | │ │ 2025-02-19 09:26:42.389987 | orchestrator | │ 123 │ │ logger.add(sys.stderr, format=log_fmt, level=level, colorize= │ 2025-02-19 09:26:42.389998 | orchestrator | │ 124 │ │ │ 2025-02-19 09:26:42.390010 | orchestrator | │ 125 │ │ if __name__ == "__main__" or __name__ == "openstack_image_man │ 2025-02-19 09:26:42.390076 | orchestrator | │ ❱ 126 │ │ │ self.main() │ 2025-02-19 09:26:42.390088 | orchestrator | │ 127 │ │ 2025-02-19 09:26:42.390099 | orchestrator | │ 128 │ def read_image_files(self, return_all_images=False) -> list: │ 2025-02-19 09:26:42.390111 | orchestrator | │ 129 │ │ """Read all YAML files in self.CONF.images""" │ 2025-02-19 09:26:42.390123 | orchestrator | │ │ 2025-02-19 09:26:42.390135 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:42.390161 | orchestrator | │ │ check = True │ │ 2025-02-19 09:26:42.390172 | orchestrator | │ │ check_age = False │ │ 2025-02-19 09:26:42.390183 | orchestrator | │ │ check_only = False │ │ 2025-02-19 09:26:42.390194 | orchestrator | │ │ cloud = 'octavia' │ │ 2025-02-19 09:26:42.390206 | orchestrator | │ │ deactivate = True │ │ 2025-02-19 09:26:42.390217 | orchestrator | │ │ debug = False │ │ 2025-02-19 09:26:42.390228 | orchestrator | │ │ delete = False │ │ 2025-02-19 09:26:42.390240 | orchestrator | │ │ dry_run = False │ │ 2025-02-19 09:26:42.390251 | orchestrator | │ │ filter = None │ │ 2025-02-19 09:26:42.390262 | orchestrator | │ │ force = False │ │ 2025-02-19 09:26:42.390273 | orchestrator | │ │ hide = False │ │ 2025-02-19 09:26:42.390284 | orchestrator | │ │ images = '/tmp/octavia' │ │ 2025-02-19 09:26:42.390296 | orchestrator | │ │ keep = False │ │ 2025-02-19 09:26:42.390307 | orchestrator | │ │ latest = False │ │ 2025-02-19 09:26:42.390318 | orchestrator | │ │ level = 'INFO' │ │ 2025-02-19 09:26:42.390329 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} │ │ 2025-02-19 09:26:42.390340 | orchestrator | │ │ | {level: <8} | '+17 │ │ 2025-02-19 09:26:42.390353 | orchestrator | │ │ max_age = 90 │ │ 2025-02-19 09:26:42.390370 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:42.390395 | orchestrator | │ │ share_action = 'add' │ │ 2025-02-19 09:26:42.390407 | orchestrator | │ │ share_domain = 'default' │ │ 2025-02-19 09:26:42.390444 | orchestrator | │ │ share_image = None │ │ 2025-02-19 09:26:42.390458 | orchestrator | │ │ share_target = None │ │ 2025-02-19 09:26:42.390488 | orchestrator | │ │ share_type = 'project' │ │ 2025-02-19 09:26:42.390500 | orchestrator | │ │ tag = 'managed_by_osism' │ │ 2025-02-19 09:26:42.390512 | orchestrator | │ │ use_os_hidden = False │ │ 2025-02-19 09:26:42.390527 | orchestrator | │ │ yes_i_really_know_what_i_do = False │ │ 2025-02-19 09:26:42.390540 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:42.390565 | orchestrator | │ │ 2025-02-19 09:26:42.390578 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:253 │ 2025-02-19 09:26:42.390591 | orchestrator | │ in main │ 2025-02-19 09:26:42.390603 | orchestrator | │ │ 2025-02-19 09:26:42.390615 | orchestrator | │ 250 │ │ else: │ 
2025-02-19 09:26:42.390628 | orchestrator | │ 251 │ │ │ self.create_connection() │ 2025-02-19 09:26:42.390641 | orchestrator | │ 252 │ │ │ images = self.read_image_files() │ 2025-02-19 09:26:42.390653 | orchestrator | │ ❱ 253 │ │ │ managed_images = self.process_images(images) │ 2025-02-19 09:26:42.390667 | orchestrator | │ 254 │ │ │ │ 2025-02-19 09:26:42.390679 | orchestrator | │ 255 │ │ │ # ignore all non-specified images when using --filter │ 2025-02-19 09:26:42.390692 | orchestrator | │ 256 │ │ │ if self.CONF.filter: │ 2025-02-19 09:26:42.390704 | orchestrator | │ │ 2025-02-19 09:26:42.390716 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:42.390727 | orchestrator | │ │ images = [ │ │ 2025-02-19 09:26:42.390739 | orchestrator | │ │ │ { │ │ 2025-02-19 09:26:42.390750 | orchestrator | │ │ │ │ 'name': 'OpenStack Octavia Amphora', │ │ 2025-02-19 09:26:42.390761 | orchestrator | │ │ │ │ 'enable': True, │ │ 2025-02-19 09:26:42.390773 | orchestrator | │ │ │ │ 'shortname': 'amphora', │ │ 2025-02-19 09:26:42.390784 | orchestrator | │ │ │ │ 'format': 'qcow2', │ │ 2025-02-19 09:26:42.390796 | orchestrator | │ │ │ │ 'login': 'ubuntu', │ │ 2025-02-19 09:26:42.390807 | orchestrator | │ │ │ │ 'min_disk': 2, │ │ 2025-02-19 09:26:42.390818 | orchestrator | │ │ │ │ 'min_ram': 512, │ │ 2025-02-19 09:26:42.390830 | orchestrator | │ │ │ │ 'status': 'active', │ │ 2025-02-19 09:26:42.390844 | orchestrator | │ │ │ │ 'visibility': 'private', │ │ 2025-02-19 09:26:42.390856 | orchestrator | │ │ │ │ 'multi': False, │ │ 2025-02-19 09:26:42.390867 | orchestrator | │ │ │ │ ... +3 │ │ 2025-02-19 09:26:42.390879 | orchestrator | │ │ │ } │ │ 2025-02-19 09:26:42.390896 | orchestrator | │ │ ] │ │ 2025-02-19 09:26:42.390908 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:42.390931 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:42.390942 | orchestrator | │ │ 2025-02-19 09:26:42.390953 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:370 │ 2025-02-19 09:26:42.390965 | orchestrator | │ in process_images │ 2025-02-19 09:26:42.390976 | orchestrator | │ │ 2025-02-19 09:26:42.390987 | orchestrator | │ 367 │ │ │ if "image_name" not in image["meta"]: │ 2025-02-19 09:26:42.390998 | orchestrator | │ 368 │ │ │ │ image["meta"]["image_name"] = image["name"] │ 2025-02-19 09:26:42.391010 | orchestrator | │ 369 │ │ │ │ 2025-02-19 09:26:42.391021 | orchestrator | │ ❱ 370 │ │ │ existing_images, imported_image, previous_image = self.pr │ 2025-02-19 09:26:42.391032 | orchestrator | │ 371 │ │ │ │ image, versions, sorted_versions, image["meta"].copy( │ 2025-02-19 09:26:42.391044 | orchestrator | │ 372 │ │ │ ) │ 2025-02-19 09:26:42.391055 | orchestrator | │ 373 │ │ │ managed_images = managed_images.union(existing_images) │ 2025-02-19 09:26:42.391071 | orchestrator | │ │ 2025-02-19 09:26:42.391083 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:42.391095 | orchestrator | │ │ image = { │ │ 2025-02-19 09:26:42.391106 | orchestrator | │ │ │ 'name': 'OpenStack Octavia Amphora', │ │ 2025-02-19 09:26:42.391117 | orchestrator | │ │ │ 'enable': True, │ │ 2025-02-19 09:26:42.391128 | orchestrator | │ │ │ 'shortname': 'amphora', │ │ 2025-02-19 09:26:42.391143 | orchestrator | │ │ │ 'format': 'qcow2', │ │ 2025-02-19 09:26:42.391154 | orchestrator | │ │ │ 'login': 'ubuntu', │ │ 2025-02-19 09:26:42.391166 | orchestrator 
| │ │ │ 'min_disk': 2, │ │ 2025-02-19 09:26:42.391178 | orchestrator | │ │ │ 'min_ram': 512, │ │ 2025-02-19 09:26:42.391189 | orchestrator | │ │ │ 'status': 'active', │ │ 2025-02-19 09:26:42.391200 | orchestrator | │ │ │ 'visibility': 'private', │ │ 2025-02-19 09:26:42.391212 | orchestrator | │ │ │ 'multi': False, │ │ 2025-02-19 09:26:42.391223 | orchestrator | │ │ │ ... +3 │ │ 2025-02-19 09:26:42.391237 | orchestrator | │ │ } │ │ 2025-02-19 09:26:42.391249 | orchestrator | │ │ images = [ │ │ 2025-02-19 09:26:42.391260 | orchestrator | │ │ │ { │ │ 2025-02-19 09:26:42.391277 | orchestrator | │ │ │ │ 'name': 'OpenStack Octavia Amphora', │ │ 2025-02-19 09:26:42.391288 | orchestrator | │ │ │ │ 'enable': True, │ │ 2025-02-19 09:26:42.391300 | orchestrator | │ │ │ │ 'shortname': 'amphora', │ │ 2025-02-19 09:26:42.391311 | orchestrator | │ │ │ │ 'format': 'qcow2', │ │ 2025-02-19 09:26:42.391322 | orchestrator | │ │ │ │ 'login': 'ubuntu', │ │ 2025-02-19 09:26:42.391334 | orchestrator | │ │ │ │ 'min_disk': 2, │ │ 2025-02-19 09:26:42.391345 | orchestrator | │ │ │ │ 'min_ram': 512, │ │ 2025-02-19 09:26:42.391356 | orchestrator | │ │ │ │ 'status': 'active', │ │ 2025-02-19 09:26:42.391367 | orchestrator | │ │ │ │ 'visibility': 'private', │ │ 2025-02-19 09:26:42.391379 | orchestrator | │ │ │ │ 'multi': False, │ │ 2025-02-19 09:26:42.391390 | orchestrator | │ │ │ │ ... +3 │ │ 2025-02-19 09:26:42.391401 | orchestrator | │ │ │ } │ │ 2025-02-19 09:26:42.391412 | orchestrator | │ │ ] │ │ 2025-02-19 09:26:42.391470 | orchestrator | │ │ managed_images = set() │ │ 2025-02-19 09:26:42.391483 | orchestrator | │ │ required_key = 'visibility' │ │ 2025-02-19 09:26:42.391494 | orchestrator | │ │ REQUIRED_KEYS = [ │ │ 2025-02-19 09:26:42.391506 | orchestrator | │ │ │ 'format', │ │ 2025-02-19 09:26:42.391517 | orchestrator | │ │ │ 'name', │ │ 2025-02-19 09:26:42.391528 | orchestrator | │ │ │ 'login', │ │ 2025-02-19 09:26:42.391540 | orchestrator | │ │ │ 'status', │ │ 2025-02-19 09:26:42.391551 | orchestrator | │ │ │ 'versions', │ │ 2025-02-19 09:26:42.391562 | orchestrator | │ │ │ 'visibility' │ │ 2025-02-19 09:26:42.391573 | orchestrator | │ │ ] │ │ 2025-02-19 09:26:42.391584 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:42.391607 | orchestrator | │ │ sorted_versions = ['2025-02-19'] │ │ 2025-02-19 09:26:42.391618 | orchestrator | │ │ url = 'https://swift.services.a.regiocloud.tech/swift/v1/AU… │ │ 2025-02-19 09:26:42.391635 | orchestrator | │ │ version = { │ │ 2025-02-19 09:26:42.391647 | orchestrator | │ │ │ 'version': '2025-02-19', │ │ 2025-02-19 09:26:42.391661 | orchestrator | │ │ │ 'url': │ │ 2025-02-19 09:26:42.391672 | orchestrator | │ │ 'https://swift.services.a.regiocloud.tech/swift/v1/AU… │ │ 2025-02-19 09:26:42.391684 | orchestrator | │ │ │ 'checksum': │ │ 2025-02-19 09:26:42.391696 | orchestrator | │ │ 'sha256:70f236f50fe253c47351838b31ae5002c52071dc75b93… │ │ 2025-02-19 09:26:42.391707 | orchestrator | │ │ │ 'build_date': datetime.date(2025, 2, 19) │ │ 2025-02-19 09:26:42.391724 | orchestrator | │ │ } │ │ 2025-02-19 09:26:42.391736 | orchestrator | │ │ versions = { │ │ 2025-02-19 09:26:42.391747 | orchestrator | │ │ │ '2025-02-19': { │ │ 2025-02-19 09:26:42.391758 | orchestrator | │ │ │ │ 'url': │ │ 2025-02-19 09:26:42.391770 | orchestrator | │ │ 'https://swift.services.a.regiocloud.tech/swift/v1/AU… │ │ 2025-02-19 09:26:42.391781 | orchestrator | │ │ │ │ 'meta': { │ │ 2025-02-19 09:26:42.391792 | orchestrator | │ │ │ │ │ 'image_source': │ │ 2025-02-19 09:26:42.391803 | orchestrator | │ │ 
'https://swift.services.a.regiocloud.tech/swift/v1/AU… │ │ 2025-02-19 09:26:42.391814 | orchestrator | │ │ │ │ │ 'image_build_date': '2025-02-19' │ │ 2025-02-19 09:26:42.391825 | orchestrator | │ │ │ │ } │ │ 2025-02-19 09:26:42.391836 | orchestrator | │ │ │ } │ │ 2025-02-19 09:26:42.391847 | orchestrator | │ │ } │ │ 2025-02-19 09:26:42.391859 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:42.391870 | orchestrator | │ │ 2025-02-19 09:26:42.391882 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:519 │ 2025-02-19 09:26:42.391893 | orchestrator | │ in process_image │ 2025-02-19 09:26:42.391904 | orchestrator | │ │ 2025-02-19 09:26:42.391915 | orchestrator | │ 516 │ │ Returns: │ 2025-02-19 09:26:42.391926 | orchestrator | │ 517 │ │ │ Tuple with (existing_images, imported_image, previous_ima │ 2025-02-19 09:26:42.391937 | orchestrator | │ 518 │ │ """ │ 2025-02-19 09:26:42.391948 | orchestrator | │ ❱ 519 │ │ cloud_images = self.get_images() │ 2025-02-19 09:26:42.391959 | orchestrator | │ 520 │ │ │ 2025-02-19 09:26:42.391971 | orchestrator | │ 521 │ │ existing_images: Set[str] = set() │ 2025-02-19 09:26:42.391982 | orchestrator | │ 522 │ │ imported_image = None │ 2025-02-19 09:26:42.391993 | orchestrator | │ │ 2025-02-19 09:26:42.392005 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:42.392016 | orchestrator | │ │ image = { │ │ 2025-02-19 09:26:42.392027 | orchestrator | │ │ │ 'name': 'OpenStack Octavia Amphora', │ │ 2025-02-19 09:26:42.392038 | orchestrator | │ │ │ 'enable': True, │ │ 2025-02-19 09:26:42.392052 | orchestrator | │ │ │ 'shortname': 'amphora', │ │ 2025-02-19 09:26:42.392063 | orchestrator | │ │ │ 'format': 'qcow2', │ │ 2025-02-19 09:26:42.392074 | orchestrator | │ │ │ 'login': 'ubuntu', │ │ 2025-02-19 09:26:42.392092 | orchestrator | │ │ │ 'min_disk': 2, │ │ 2025-02-19 09:26:42.392103 | orchestrator | │ │ │ 'min_ram': 512, │ │ 2025-02-19 09:26:42.392115 | orchestrator | │ │ │ 'status': 'active', │ │ 2025-02-19 09:26:42.392130 | orchestrator | │ │ │ 'visibility': 'private', │ │ 2025-02-19 09:26:42.392142 | orchestrator | │ │ │ 'multi': False, │ │ 2025-02-19 09:26:42.392153 | orchestrator | │ │ │ ... +3 │ │ 2025-02-19 09:26:42.392164 | orchestrator | │ │ } │ │ 2025-02-19 09:26:42.392176 | orchestrator | │ │ meta = { │ │ 2025-02-19 09:26:42.392187 | orchestrator | │ │ │ 'architecture': 'x86_64', │ │ 2025-02-19 09:26:42.392204 | orchestrator | │ │ │ 'hw_disk_bus': 'scsi', │ │ 2025-02-19 09:26:42.392215 | orchestrator | │ │ │ 'hw_rng_model': 'virtio', │ │ 2025-02-19 09:26:42.392227 | orchestrator | │ │ │ 'hw_scsi_model': 'virtio-scsi', │ │ 2025-02-19 09:26:42.392238 | orchestrator | │ │ │ 'hw_watchdog_action': 'reset', │ │ 2025-02-19 09:26:42.392249 | orchestrator | │ │ │ 'hypervisor_type': 'qemu', │ │ 2025-02-19 09:26:42.392260 | orchestrator | │ │ │ 'os_distro': 'ubuntu', │ │ 2025-02-19 09:26:42.392271 | orchestrator | │ │ │ 'replace_frequency': 'quarterly', │ │ 2025-02-19 09:26:42.392283 | orchestrator | │ │ │ 'uuid_validity': 'last-1', │ │ 2025-02-19 09:26:42.392294 | orchestrator | │ │ │ 'provided_until': 'none', │ │ 2025-02-19 09:26:42.392305 | orchestrator | │ │ │ ... 
+2 │ │ 2025-02-19 09:26:42.392316 | orchestrator | │ │ } │ │ 2025-02-19 09:26:42.392327 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:42.392350 | orchestrator | │ │ sorted_versions = ['2025-02-19'] │ │ 2025-02-19 09:26:42.392361 | orchestrator | │ │ versions = { │ │ 2025-02-19 09:26:42.392372 | orchestrator | │ │ │ '2025-02-19': { │ │ 2025-02-19 09:26:42.392383 | orchestrator | │ │ │ │ 'url': │ │ 2025-02-19 09:26:42.392395 | orchestrator | │ │ 'https://swift.services.a.regiocloud.tech/swift/v1/AU… │ │ 2025-02-19 09:26:42.392406 | orchestrator | │ │ │ │ 'meta': { │ │ 2025-02-19 09:26:42.392417 | orchestrator | │ │ │ │ │ 'image_source': │ │ 2025-02-19 09:26:42.392450 | orchestrator | │ │ 'https://swift.services.a.regiocloud.tech/swift/v1/AU… │ │ 2025-02-19 09:26:42.392462 | orchestrator | │ │ │ │ │ 'image_build_date': '2025-02-19' │ │ 2025-02-19 09:26:42.392473 | orchestrator | │ │ │ │ } │ │ 2025-02-19 09:26:42.392484 | orchestrator | │ │ │ } │ │ 2025-02-19 09:26:42.392496 | orchestrator | │ │ } │ │ 2025-02-19 09:26:42.392513 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:42.392525 | orchestrator | │ │ 2025-02-19 09:26:42.392536 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack_image_manager/main.py:440 │ 2025-02-19 09:26:42.392547 | orchestrator | │ in get_images │ 2025-02-19 09:26:42.392559 | orchestrator | │ │ 2025-02-19 09:26:42.392569 | orchestrator | │ 437 │ │ """ │ 2025-02-19 09:26:42.392581 | orchestrator | │ 438 │ │ result = {} │ 2025-02-19 09:26:42.392592 | orchestrator | │ 439 │ │ │ 2025-02-19 09:26:42.392603 | orchestrator | │ ❱ 440 │ │ for image in self.conn.image.images(): │ 2025-02-19 09:26:42.392614 | orchestrator | │ 441 │ │ │ if self.CONF.tag in image.tags and ( │ 2025-02-19 09:26:42.392625 | orchestrator | │ 442 │ │ │ │ image.visibility == "public" │ 2025-02-19 09:26:42.392636 | orchestrator | │ 443 │ │ │ │ or image.owner == self.conn.current_project_id │ 2025-02-19 09:26:42.392647 | orchestrator | │ │ 2025-02-19 09:26:42.392664 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:42.392676 | orchestrator | │ │ result = {} │ │ 2025-02-19 09:26:42.392687 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:42.392710 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:42.392721 | orchestrator | │ │ 2025-02-19 09:26:42.392733 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack/service_description.py:89 │ 2025-02-19 09:26:42.392744 | orchestrator | │ in __get__ │ 2025-02-19 09:26:42.392755 | orchestrator | │ │ 2025-02-19 09:26:42.392766 | orchestrator | │ 86 │ │ if instance is None: │ 2025-02-19 09:26:42.392778 | orchestrator | │ 87 │ │ │ return self │ 2025-02-19 09:26:42.392789 | orchestrator | │ 88 │ │ if self.service_type not in instance._proxies: │ 2025-02-19 09:26:42.392800 | orchestrator | │ ❱ 89 │ │ │ proxy = self._make_proxy(instance) │ 2025-02-19 09:26:42.392811 | orchestrator | │ 90 │ │ │ if not isinstance(proxy, _ServiceDisabledProxyShim): │ 2025-02-19 09:26:42.392822 | orchestrator | │ 91 │ │ │ │ # The keystone proxy has a method called get_endpoint │ 2025-02-19 09:26:42.392833 | orchestrator | │ 92 │ │ │ │ # that is about managing keystone endpoints. 
This is │ 2025-02-19 09:26:42.392849 | orchestrator | │ │ 2025-02-19 09:26:42.392861 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:42.392877 | orchestrator | │ │ instance = │ │ 2025-02-19 09:26:42.392889 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:42.392911 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:42.392923 | orchestrator | │ │ 2025-02-19 09:26:42.392934 | orchestrator | │ /usr/local/lib/python3.12/site-packages/openstack/service_description.py:291 │ 2025-02-19 09:26:42.392945 | orchestrator | │ in _make_proxy │ 2025-02-19 09:26:42.392956 | orchestrator | │ │ 2025-02-19 09:26:42.392967 | orchestrator | │ 288 │ │ if found_version is None: │ 2025-02-19 09:26:42.392978 | orchestrator | │ 289 │ │ │ region_name = instance.config.get_region_name(self.service │ 2025-02-19 09:26:42.392989 | orchestrator | │ 290 │ │ │ if version_kwargs: │ 2025-02-19 09:26:42.393001 | orchestrator | │ ❱ 291 │ │ │ │ raise exceptions.NotSupported( │ 2025-02-19 09:26:42.393012 | orchestrator | │ 292 │ │ │ │ │ f"The {self.service_type} service for " │ 2025-02-19 09:26:42.393023 | orchestrator | │ 293 │ │ │ │ │ f"{instance.name}:{region_name} exists but does no │ 2025-02-19 09:26:42.393034 | orchestrator | │ 294 │ │ │ │ │ f"any supported versions." │ 2025-02-19 09:26:42.393045 | orchestrator | │ │ 2025-02-19 09:26:42.393057 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-02-19 09:26:42.393068 | orchestrator | │ │ config = │ │ 2025-02-19 09:26:42.393090 | orchestrator | │ │ endpoint_override = None │ │ 2025-02-19 09:26:42.393106 | orchestrator | │ │ found_version = None │ │ 2025-02-19 09:26:42.848939 | orchestrator | │ │ instance = │ │ 2025-02-19 09:26:42.849579 | orchestrator | │ │ proxy_obj = None │ │ 2025-02-19 09:26:42.849588 | orchestrator | │ │ region_name = '' │ │ 2025-02-19 09:26:42.849593 | orchestrator | │ │ self = │ │ 2025-02-19 09:26:42.849604 | orchestrator | │ │ supported_versions = [1, 2] │ │ 2025-02-19 09:26:42.849610 | orchestrator | │ │ temp_adapter = │ │ 2025-02-19 09:26:42.849615 | orchestrator | │ │ version_kwargs = {'min_version': '1', 'max_version': '2.latest'} │ │ 2025-02-19 09:26:42.849620 | orchestrator | │ │ version_string = None │ │ 2025-02-19 09:26:42.849639 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-02-19 09:26:42.849648 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2025-02-19 09:26:42.849655 | orchestrator | NotSupported: The image service for octavia: exists but does not have any 2025-02-19 09:26:42.849661 | orchestrator | supported versions. 
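Both failures bottom out in the same place: openstacksdk's proxy creation for the image service cannot negotiate a version between min_version 1 and max_version 2.latest after discovery against https://api.testbed.osism.xyz:9292 fails, so NotSupported is raised before get_images() can list anything — first for the admin cloud, then for the octavia cloud. A minimal reproduction sketch, assuming a clouds.yaml entry named "octavia" like the one shown in the locals above:

import openstack
from openstack.exceptions import NotSupported

# Assumed: a clouds.yaml entry named "octavia", as used by the tool above.
conn = openstack.connect(cloud="octavia")

try:
    # Accessing conn.image builds the image proxy and therefore runs version
    # discovery against the Glance endpoint; that is where the traceback ends.
    names = [image.name for image in conn.image.images()]
    print(f"found {len(names)} images")
except NotSupported as exc:
    # Same exception as in the log: the image endpoint exists in the catalog,
    # but no supported API version (min 1, max 2.latest) could be discovered.
    print(f"image service unusable: {exc}")

Note that the surrounding task still reports "changed" and the run continues with the next task ("Run checks"), so this failure surfaces only in the console output above.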
2025-02-19 09:26:43.097782 | orchestrator | changed 2025-02-19 09:26:43.120094 | 2025-02-19 09:26:43.120215 | TASK [Run checks] 2025-02-19 09:26:43.808045 | orchestrator | + set -e 2025-02-19 09:26:43.809166 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-19 09:26:43.809211 | orchestrator | ++ export INTERACTIVE=false 2025-02-19 09:26:43.809229 | orchestrator | ++ INTERACTIVE=false 2025-02-19 09:26:43.809276 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-19 09:26:43.809296 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-19 09:26:43.809312 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-02-19 09:26:43.809352 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-02-19 09:26:43.845569 | orchestrator | 2025-02-19 09:26:43.845829 | orchestrator | # CHECK 2025-02-19 09:26:43.845860 | orchestrator | 2025-02-19 09:26:43.845876 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-19 09:26:43.845892 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-19 09:26:43.845906 | orchestrator | + echo 2025-02-19 09:26:43.845920 | orchestrator | + echo '# CHECK' 2025-02-19 09:26:43.845935 | orchestrator | + echo 2025-02-19 09:26:43.845951 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-19 09:26:43.845975 | orchestrator | ++ semver latest 5.0.0 2025-02-19 09:26:43.902348 | orchestrator | 2025-02-19 09:26:46.353996 | orchestrator | ## Containers @ testbed-manager 2025-02-19 09:26:46.354133 | orchestrator | 2025-02-19 09:26:46.354144 | orchestrator | + [[ -1 -eq -1 ]] 2025-02-19 09:26:46.354153 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-19 09:26:46.354162 | orchestrator | + echo 2025-02-19 09:26:46.354172 | orchestrator | + echo '## Containers @ testbed-manager' 2025-02-19 09:26:46.354182 | orchestrator | + echo 2025-02-19 09:26:46.354191 | orchestrator | + osism container testbed-manager ps 2025-02-19 09:26:46.354233 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-19 09:26:46.354246 | orchestrator | 99dd7f8559fa registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 11 minutes prometheus_blackbox_exporter 2025-02-19 09:26:46.354259 | orchestrator | b270e5a2888a registry.osism.tech/kolla/prometheus-alertmanager:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager 2025-02-19 09:26:46.354270 | orchestrator | 73ab5c3b1fd5 registry.osism.tech/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-02-19 09:26:46.354285 | orchestrator | 620624fe5da0 registry.osism.tech/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-02-19 09:26:46.354294 | orchestrator | 5e7696ac0097 registry.osism.tech/kolla/prometheus-v2-server:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2025-02-19 09:26:46.354315 | orchestrator | fa4b1ba04dac registry.osism.tech/kolla/cron:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes cron 2025-02-19 09:26:46.354325 | orchestrator | bf07a2d4752b registry.osism.tech/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes kolla_toolbox 2025-02-19 09:26:46.354334 | orchestrator | 32cb67cfb197 registry.osism.tech/kolla/fluentd:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes fluentd 2025-02-19 09:26:46.354342 | 
orchestrator | 0a585395cf84 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 36 minutes ago Up 35 minutes (healthy) 80/tcp phpmyadmin 2025-02-19 09:26:46.354370 | orchestrator | 6c412e93f9f3 registry.osism.tech/osism/openstackclient:2024.1 "/usr/bin/dumb-init …" 36 minutes ago Up 36 minutes openstackclient 2025-02-19 09:26:46.354379 | orchestrator | fce486c298bd registry.osism.tech/osism/homer:v25.02.1 "/bin/sh /entrypoint…" 36 minutes ago Up 36 minutes (healthy) 8080/tcp homer 2025-02-19 09:26:46.354387 | orchestrator | 6fe66e699a09 ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 55 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-02-19 09:26:46.354400 | orchestrator | 6ab949df6f0c registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 59 minutes (healthy) manager-inventory_reconciler-1 2025-02-19 09:26:46.354408 | orchestrator | e03f70f092c2 registry.osism.tech/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) kolla-ansible 2025-02-19 09:26:46.354417 | orchestrator | 334892191fbb registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) osism-ansible 2025-02-19 09:26:46.354454 | orchestrator | 4e9d6ddc055b registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) osism-kubernetes 2025-02-19 09:26:46.354463 | orchestrator | e0dcae141aef registry.osism.tech/osism/ceph-ansible:quincy "/entrypoint.sh osis…" About an hour ago Up About an hour (healthy) ceph-ansible 2025-02-19 09:26:46.354472 | orchestrator | 7b5e61cc7431 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up About an hour (healthy) 8000/tcp manager-ara-server-1 2025-02-19 09:26:46.354481 | orchestrator | 259409fdeda4 redis:7.4.2-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp manager-redis-1 2025-02-19 09:26:46.354489 | orchestrator | 6745d456876c mariadb:11.6.2 "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 3306/tcp manager-mariadb-1 2025-02-19 09:26:46.354498 | orchestrator | 17ecd06ed703 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- sl…" About an hour ago Up About an hour (healthy) osismclient 2025-02-19 09:26:46.354507 | orchestrator | 315db33c63fa registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-watchdog-1 2025-02-19 09:26:46.354519 | orchestrator | e3ab6004d8b2 registry.osism.tech/osism/osism-netbox:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-netbox-1 2025-02-19 09:26:46.354537 | orchestrator | 8528725401d3 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-conductor-1 2025-02-19 09:26:46.354551 | orchestrator | 4d0744d1a028 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-02-19 09:26:46.354561 | orchestrator | 1e32b9b62170 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-openstack-1 2025-02-19 09:26:46.354571 | orchestrator | 82678c00e5d9 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-beat-1 2025-02-19 09:26:46.354580 | orchestrator | b3e8a16604ce registry.osism.tech/osism/osism:latest 
"/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-flower-1 2025-02-19 09:26:46.354590 | orchestrator | f2523da1e15c registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" About an hour ago Up About an hour (healthy) manager-listener-1 2025-02-19 09:26:46.354600 | orchestrator | 21bca832e30e registry.osism.tech/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" About an hour ago Up About an hour (healthy) netbox-netbox-worker-1 2025-02-19 09:26:46.354612 | orchestrator | 6ed78e4ce4b8 registry.osism.tech/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" About an hour ago Up About an hour (healthy) netbox-netbox-1 2025-02-19 09:26:46.354630 | orchestrator | e7750837ae59 postgres:16.6-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 5432/tcp netbox-postgres-1 2025-02-19 09:26:46.713015 | orchestrator | 75f2e791b741 redis:7.4.2-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp netbox-redis-1 2025-02-19 09:26:46.713148 | orchestrator | 5e4fecfddcc3 traefik:v3.3.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-02-19 09:26:46.713190 | orchestrator | 2025-02-19 09:26:49.250189 | orchestrator | ## Images @ testbed-manager 2025-02-19 09:26:49.250357 | orchestrator | 2025-02-19 09:26:49.250381 | orchestrator | + echo 2025-02-19 09:26:49.250397 | orchestrator | + echo '## Images @ testbed-manager' 2025-02-19 09:26:49.250413 | orchestrator | + echo 2025-02-19 09:26:49.250460 | orchestrator | + osism container testbed-manager images 2025-02-19 09:26:49.250499 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-19 09:26:49.536547 | orchestrator | registry.osism.tech/osism/osism-ansible latest ac8e0c077f51 About an hour ago 1.39GB 2025-02-19 09:26:49.536663 | orchestrator | 5608597ff277 About an hour ago 1.39GB 2025-02-19 09:26:49.536674 | orchestrator | registry.osism.tech/osism/osism-netbox latest 0ccce33096b0 2 hours ago 899MB 2025-02-19 09:26:49.536683 | orchestrator | registry.osism.tech/osism/osism latest d8fb5584e0dd 2 hours ago 841MB 2025-02-19 09:26:49.536712 | orchestrator | registry.osism.tech/osism/homer v25.02.1 6951c802a9fb 6 hours ago 17.7MB 2025-02-19 09:26:49.536721 | orchestrator | registry.osism.tech/kolla/cron 2024.1 98ff1ec9784a 8 hours ago 387MB 2025-02-19 09:26:49.536731 | orchestrator | registry.osism.tech/kolla/fluentd 2024.1 07c7aabf23f8 8 hours ago 789MB 2025-02-19 09:26:49.536740 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.1 028abe724e22 8 hours ago 969MB 2025-02-19 09:26:49.536748 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.1 0a5db6608f13 8 hours ago 446MB 2025-02-19 09:26:49.536772 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.1 c0343dd85517 8 hours ago 450MB 2025-02-19 09:26:49.536781 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.1 1145a3c22ef8 8 hours ago 1.09GB 2025-02-19 09:26:49.536789 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.1 dc241803e050 8 hours ago 524MB 2025-02-19 09:26:49.536797 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.1 7c5244ec3385 8 hours ago 582MB 2025-02-19 09:26:49.536805 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.1 f3123b500809 9 hours ago 910MB 2025-02-19 09:26:49.536813 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 04d1b0649c51 9 hours ago 1.41GB 2025-02-19 
09:26:49.536821 | orchestrator | registry.osism.tech/osism/ceph-ansible quincy 8a08b9feb885 9 hours ago 762MB 2025-02-19 09:26:49.536829 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 1e559b8cbb5b 9 hours ago 405MB 2025-02-19 09:26:49.536837 | orchestrator | registry.osism.tech/osism/openstackclient 2024.1 2c952fe327a5 13 days ago 365MB 2025-02-19 09:26:49.536845 | orchestrator | postgres 16.6-alpine 1d04b9ba1d49 2 weeks ago 393MB 2025-02-19 09:26:49.536853 | orchestrator | traefik v3.3.3 19884a9d0b92 2 weeks ago 245MB 2025-02-19 09:26:49.536861 | orchestrator | hashicorp/vault 1.18.4 790a848da73e 2 weeks ago 660MB 2025-02-19 09:26:49.536870 | orchestrator | phpmyadmin/phpmyadmin 5.2 95e01f723b5e 3 weeks ago 814MB 2025-02-19 09:26:49.536879 | orchestrator | redis 7.4.2-alpine 02419de7eddf 6 weeks ago 60.6MB 2025-02-19 09:26:49.536887 | orchestrator | registry.osism.tech/osism/netbox v4.1.10 44985fcfbb33 8 weeks ago 1.19GB 2025-02-19 09:26:49.536895 | orchestrator | mariadb 11.6.2 bfb1298c06cd 2 months ago 562MB 2025-02-19 09:26:49.536903 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 fc2de914b13a 5 months ago 476MB 2025-02-19 09:26:49.536911 | orchestrator | ubuntu/squid 6.1-23.10_beta fbc0312a9b70 8 months ago 219MB 2025-02-19 09:26:49.536935 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-19 09:26:49.586136 | orchestrator | ++ semver latest 5.0.0 2025-02-19 09:26:49.586289 | orchestrator | 2025-02-19 09:26:52.066391 | orchestrator | ## Containers @ testbed-node-0 2025-02-19 09:26:52.066594 | orchestrator | 2025-02-19 09:26:52.066645 | orchestrator | + [[ -1 -eq -1 ]] 2025-02-19 09:26:52.066672 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-19 09:26:52.066699 | orchestrator | + echo 2025-02-19 09:26:52.066726 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-02-19 09:26:52.066754 | orchestrator | + echo 2025-02-19 09:26:52.066780 | orchestrator | + osism container testbed-node-0 ps 2025-02-19 09:26:52.066837 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-19 09:26:52.066898 | orchestrator | 4d621fad9e90 registry.osism.tech/kolla/octavia-worker:2024.1 "dumb-init --single-…" 2 minutes ago Up 2 minutes (healthy) octavia_worker 2025-02-19 09:26:52.066927 | orchestrator | d4497e7e459d registry.osism.tech/kolla/octavia-housekeeping:2024.1 "dumb-init --single-…" 2 minutes ago Up 2 minutes (healthy) octavia_housekeeping 2025-02-19 09:26:52.066953 | orchestrator | 723c6d06bd7d registry.osism.tech/kolla/octavia-health-manager:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) octavia_health_manager 2025-02-19 09:26:52.066979 | orchestrator | 14cbc3c88176 registry.osism.tech/kolla/octavia-driver-agent:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes octavia_driver_agent 2025-02-19 09:26:52.067005 | orchestrator | a0effb1792ee registry.osism.tech/kolla/octavia-api:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) octavia_api 2025-02-19 09:26:52.067031 | orchestrator | 7b8690600d22 registry.osism.tech/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-02-19 09:26:52.067056 | orchestrator | ef31660e71fb registry.osism.tech/kolla/grafana:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-02-19 09:26:52.067082 | orchestrator | 36eb9dbbdbb2 registry.osism.tech/kolla/magnum-api:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) 
magnum_api 2025-02-19 09:26:52.067108 | orchestrator | 6dbbcea2a24a registry.osism.tech/kolla/placement-api:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api 2025-02-19 09:26:52.067134 | orchestrator | 52736dd7f40b registry.osism.tech/kolla/designate-worker:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-02-19 09:26:52.067160 | orchestrator | 698c81ee43a5 registry.osism.tech/kolla/designate-mdns:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-02-19 09:26:52.067185 | orchestrator | 1bdad671a517 registry.osism.tech/kolla/designate-producer:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-02-19 09:26:52.067210 | orchestrator | 60a2fc23148c registry.osism.tech/kolla/designate-central:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-02-19 09:26:52.067236 | orchestrator | 6c11b45dfc0b registry.osism.tech/kolla/nova-compute-ironic:2024.1 "dumb-init --single-…" 9 minutes ago Up 6 seconds (health: starting) nova_compute_ironic 2025-02-19 09:26:52.067262 | orchestrator | 5e0f73455dc1 registry.osism.tech/kolla/designate-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-02-19 09:26:52.067292 | orchestrator | 857cc10f8980 registry.osism.tech/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (unhealthy) ironic_neutron_agent 2025-02-19 09:26:52.067318 | orchestrator | 7872c6ae0dd3 registry.osism.tech/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-02-19 09:26:52.067344 | orchestrator | 8a2bb8f90b66 registry.osism.tech/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-02-19 09:26:52.067377 | orchestrator | 2419e29b4a3b registry.osism.tech/kolla/nova-conductor:2024.1 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_conductor 2025-02-19 09:26:52.067495 | orchestrator | ac88ab902b0f registry.osism.tech/kolla/neutron-server:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-02-19 09:26:52.067549 | orchestrator | 87b57b9a629f registry.osism.tech/kolla/nova-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-02-19 09:26:52.067577 | orchestrator | 142ab812c2b1 registry.osism.tech/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 12 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-02-19 09:26:52.067601 | orchestrator | a1a16ac60e5b registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-02-19 09:26:52.067624 | orchestrator | ea3a70c13438 registry.osism.tech/kolla/barbican-worker:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-02-19 09:26:52.067639 | orchestrator | 33609d0519c0 registry.osism.tech/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-02-19 09:26:52.067654 | orchestrator | 3eb835b5549e registry.osism.tech/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-02-19 09:26:52.067668 | orchestrator | f1ee34087a3e registry.osism.tech/kolla/barbican-api:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 
2025-02-19 09:26:52.067682 | orchestrator | 5c6345265485 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-02-19 09:26:52.067696 | orchestrator | 9f0e049d5a6c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-02-19 09:26:52.067717 | orchestrator | 04aea1d4f83c registry.osism.tech/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-02-19 09:26:52.067732 | orchestrator | c47632664c2e registry.osism.tech/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_scheduler 2025-02-19 09:26:52.067746 | orchestrator | f23041f5b7b6 registry.osism.tech/kolla/cinder-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2025-02-19 09:26:52.067760 | orchestrator | 4dec14a4db1d registry.osism.tech/kolla/keystone:2024.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-02-19 09:26:52.067775 | orchestrator | b52130fbaff0 registry.osism.tech/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-02-19 09:26:52.067789 | orchestrator | 65b3ad9c2ef2 registry.osism.tech/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-02-19 09:26:52.067803 | orchestrator | e5abeff34c35 registry.osism.tech/kolla/horizon:2024.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (unhealthy) horizon 2025-02-19 09:26:52.067817 | orchestrator | 94565647d997 registry.osism.tech/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-02-19 09:26:52.067831 | orchestrator | 1bba7383f169 registry.osism.tech/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes mariadb_clustercheck 2025-02-19 09:26:52.067855 | orchestrator | 20371b2c2174 registry.osism.tech/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch_dashboards 2025-02-19 09:26:52.067870 | orchestrator | 3b8672ebb8d6 registry.osism.tech/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2025-02-19 09:26:52.067889 | orchestrator | ecef535e5aef registry.osism.tech/kolla/opensearch:2024.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch 2025-02-19 09:26:52.067913 | orchestrator | 27b1b85cbe82 registry.osism.tech/kolla/keepalived:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2025-02-19 09:26:52.067946 | orchestrator | 3714b770fd68 registry.osism.tech/kolla/haproxy:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2025-02-19 09:26:52.418438 | orchestrator | 64536ba551c5 registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 29 minutes ago Up 29 minutes ceph-mgr-testbed-node-0 2025-02-19 09:26:52.418529 | orchestrator | ec8531d2e7b8 registry.osism.tech/kolla/ovn-northd:2024.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_northd 2025-02-19 09:26:52.418538 | orchestrator | 503d814a20c2 registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-0 2025-02-19 09:26:52.418545 | orchestrator | 211034c08d41 registry.osism.tech/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 31 minutes 
ago Up 30 minutes ovn_sb_db 2025-02-19 09:26:52.418559 | orchestrator | 459efde86b2d registry.osism.tech/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes ovn_nb_db 2025-02-19 09:26:52.418564 | orchestrator | 3cff285a9f6c registry.osism.tech/kolla/ovn-controller:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_controller 2025-02-19 09:26:52.418570 | orchestrator | 94b97dc56a35 registry.osism.tech/kolla/rabbitmq:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) rabbitmq 2025-02-19 09:26:52.418575 | orchestrator | 461a72f92c48 registry.osism.tech/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) openvswitch_vswitchd 2025-02-19 09:26:52.418581 | orchestrator | 767a8c2a05e5 registry.osism.tech/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) openvswitch_db 2025-02-19 09:26:52.418586 | orchestrator | 195c902d2d0a registry.osism.tech/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) redis_sentinel 2025-02-19 09:26:52.418591 | orchestrator | 424d2b0698dc registry.osism.tech/kolla/redis:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) redis 2025-02-19 09:26:52.418596 | orchestrator | c746fc5816fb registry.osism.tech/kolla/memcached:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) memcached 2025-02-19 09:26:52.418601 | orchestrator | ff994ef1cfcf registry.osism.tech/kolla/cron:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes cron 2025-02-19 09:26:52.418606 | orchestrator | 4b5ddf1ec544 registry.osism.tech/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes kolla_toolbox 2025-02-19 09:26:52.418629 | orchestrator | 94b776015fd6 registry.osism.tech/kolla/fluentd:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes fluentd 2025-02-19 09:26:52.418647 | orchestrator | 2025-02-19 09:26:54.972302 | orchestrator | ## Images @ testbed-node-0 2025-02-19 09:26:54.972458 | orchestrator | 2025-02-19 09:26:54.972488 | orchestrator | + echo 2025-02-19 09:26:54.972506 | orchestrator | + echo '## Images @ testbed-node-0' 2025-02-19 09:26:54.972526 | orchestrator | + echo 2025-02-19 09:26:54.972544 | orchestrator | + osism container testbed-node-0 images 2025-02-19 09:26:54.972582 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-19 09:26:54.972603 | orchestrator | registry.osism.tech/osism/ceph-daemon quincy a767d52e1d4c 6 hours ago 1.94GB 2025-02-19 09:26:54.972621 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.1 3288e32ffa4c 8 hours ago 492MB 2025-02-19 09:26:54.972638 | orchestrator | registry.osism.tech/kolla/cron 2024.1 98ff1ec9784a 8 hours ago 387MB 2025-02-19 09:26:54.972657 | orchestrator | registry.osism.tech/kolla/haproxy 2024.1 5e4f85889bad 8 hours ago 399MB 2025-02-19 09:26:54.972669 | orchestrator | registry.osism.tech/kolla/opensearch 2024.1 98f10340a5df 8 hours ago 2.63GB 2025-02-19 09:26:54.972680 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.1 fd7d1be573b9 8 hours ago 2.25GB 2025-02-19 09:26:54.972691 | orchestrator | registry.osism.tech/kolla/memcached 2024.1 0f028ade7d7f 8 hours ago 387MB 2025-02-19 09:26:54.972702 | orchestrator | registry.osism.tech/kolla/fluentd 2024.1 07c7aabf23f8 8 hours ago 789MB 2025-02-19 09:26:54.972712 | orchestrator | registry.osism.tech/kolla/grafana 2024.1 03da0cffb400 8 hours ago 1.15GB 2025-02-19 09:26:54.972722 | orchestrator | 
registry.osism.tech/kolla/keepalived 2024.1 664de01dd0e2 8 hours ago 401MB 2025-02-19 09:26:54.972732 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.1 028abe724e22 8 hours ago 969MB 2025-02-19 09:26:54.972742 | orchestrator | registry.osism.tech/kolla/redis 2024.1 ae5277186fa1 8 hours ago 394MB 2025-02-19 09:26:54.972753 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.1 0c2e8a111244 8 hours ago 394MB 2025-02-19 09:26:54.972763 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.1 cbbaa43f92fa 8 hours ago 406MB 2025-02-19 09:26:54.972773 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.1 995e008ceaef 8 hours ago 406MB 2025-02-19 09:26:54.972783 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.1 0a5db6608f13 8 hours ago 446MB 2025-02-19 09:26:54.972793 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.1 ac6316ddfec3 8 hours ago 421MB 2025-02-19 09:26:54.972804 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.1 aea72b8425d6 8 hours ago 433MB 2025-02-19 09:26:54.972814 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.1 dc241803e050 8 hours ago 524MB 2025-02-19 09:26:54.972824 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.1 f7a2235b5450 8 hours ago 430MB 2025-02-19 09:26:54.972835 | orchestrator | registry.osism.tech/kolla/ironic-inspector 2024.1 ba05260046a0 8 hours ago 1.37GB 2025-02-19 09:26:54.972847 | orchestrator | registry.osism.tech/kolla/horizon 2024.1 e903d8408469 8 hours ago 1.61GB 2025-02-19 09:26:54.972859 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.1 46f457824fab 8 hours ago 635MB 2025-02-19 09:26:54.972881 | orchestrator | registry.osism.tech/kolla/mariadb-clustercheck 2024.1 a4af7cf39329 8 hours ago 428MB 2025-02-19 09:26:54.972914 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.1 d7b858e2d70d 8 hours ago 1.12GB 2025-02-19 09:26:54.972927 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.1 d81baf00774d 8 hours ago 1.12GB 2025-02-19 09:26:54.972938 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.1 2fb32dcd7c3c 8 hours ago 1.12GB 2025-02-19 09:26:54.972950 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.1 08208813418f 8 hours ago 1.12GB 2025-02-19 09:26:54.972962 | orchestrator | registry.osism.tech/kolla/ironic-api 2024.1 7b7c38390892 8 hours ago 1.43GB 2025-02-19 09:26:54.972973 | orchestrator | registry.osism.tech/kolla/ironic-conductor 2024.1 10d7478c0b98 8 hours ago 1.79GB 2025-02-19 09:26:54.972985 | orchestrator | registry.osism.tech/kolla/ironic-pxe 2024.1 5ae1afbd0c85 8 hours ago 1.52GB 2025-02-19 09:26:54.972996 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.1 92aa60736327 8 hours ago 1.4GB 2025-02-19 09:26:54.973008 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.1 ee0705caef58 8 hours ago 1.45GB 2025-02-19 09:26:54.973020 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.1 5bbdab109cee 8 hours ago 1.32GB 2025-02-19 09:26:54.973032 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.1 1c1e065f274c 8 hours ago 1.32GB 2025-02-19 09:26:54.973043 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.1 d79fe278ec83 8 hours ago 1.32GB 2025-02-19 09:26:54.973055 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.1 00795cc8c444 8 hours ago 1.32GB 2025-02-19 09:26:54.973076 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.1 
c9fe3c166ca7 8 hours ago 1.51GB 2025-02-19 09:26:55.230712 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.1 abf32211b8c0 8 hours ago 1.66GB 2025-02-19 09:26:55.230831 | orchestrator | registry.osism.tech/kolla/designate-central 2024.1 5ebd0456d2af 8 hours ago 1.34GB 2025-02-19 09:26:55.230851 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.1 64660af52da9 8 hours ago 1.34GB 2025-02-19 09:26:55.230867 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.1 be4c57f33b38 8 hours ago 1.34GB 2025-02-19 09:26:55.230882 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.1 9b561a5decf5 8 hours ago 1.34GB 2025-02-19 09:26:55.230897 | orchestrator | registry.osism.tech/kolla/designate-api 2024.1 74da4ba7f935 8 hours ago 1.34GB 2025-02-19 09:26:55.230911 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.1 c8416100133d 8 hours ago 1.34GB 2025-02-19 09:26:55.230926 | orchestrator | registry.osism.tech/kolla/placement-api 2024.1 bb6f1994c4df 8 hours ago 1.32GB 2025-02-19 09:26:55.230940 | orchestrator | registry.osism.tech/kolla/glance-api 2024.1 05eedb03d26d 8 hours ago 1.48GB 2025-02-19 09:26:55.230955 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.1 71efd452af4c 8 hours ago 1.34GB 2025-02-19 09:26:55.230969 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.1 2832c15319e8 8 hours ago 1.34GB 2025-02-19 09:26:55.230983 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.1 409ed81b3a2b 8 hours ago 1.34GB 2025-02-19 09:26:55.231010 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.1 65f09a4358b6 8 hours ago 1.4GB 2025-02-19 09:26:55.231035 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.1 721a4e6aa105 8 hours ago 1.4GB 2025-02-19 09:26:55.231051 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.1 c369b1d1d48a 8 hours ago 1.4GB 2025-02-19 09:26:55.231066 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.1 4c8e3e184a31 8 hours ago 1.43GB 2025-02-19 09:26:55.231107 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.1 d82502cddd24 8 hours ago 1.43GB 2025-02-19 09:26:55.231122 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.1 c46ac482459a 8 hours ago 1.88GB 2025-02-19 09:26:55.231136 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.1 9ae5c89e30e8 8 hours ago 1.87GB 2025-02-19 09:26:55.231154 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.1 7737752a925b 8 hours ago 1.33GB 2025-02-19 09:26:55.231169 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.1 8bcd4ca9c510 8 hours ago 1.32GB 2025-02-19 09:26:55.231184 | orchestrator | registry.osism.tech/kolla/nova-api 2024.1 545ec5b33a2d 8 hours ago 1.64GB 2025-02-19 09:26:55.231198 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.1 8f50b91f58aa 8 hours ago 1.64GB 2025-02-19 09:26:55.231212 | orchestrator | registry.osism.tech/kolla/nova-compute-ironic 2024.1 a639d851ea15 8 hours ago 1.65GB 2025-02-19 09:26:55.231227 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.1 db577a27ec33 8 hours ago 1.64GB 2025-02-19 09:26:55.231244 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.1 ea979d965cfa 8 hours ago 1.79GB 2025-02-19 09:26:55.231260 | orchestrator | registry.osism.tech/kolla/ironic-neutron-agent 2024.1 e9c79d08e8e2 8 hours ago 1.59GB 2025-02-19 09:26:55.231276 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.1 e9c7267bfa54 8 hours ago 
1.6GB 2025-02-19 09:26:55.231292 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.1 a5db53dba604 8 hours ago 1.4GB 2025-02-19 09:26:55.231308 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.1 e012f71898cd 8 hours ago 1.4GB 2025-02-19 09:26:55.231324 | orchestrator | registry.osism.tech/kolla/keystone 2024.1 eeb97e5dfe72 8 hours ago 1.44GB 2025-02-19 09:26:55.231339 | orchestrator | registry.osism.tech/kolla/heat-engine 2024.1 28985acf8fe5 3 weeks ago 1.42GB 2025-02-19 09:26:55.231355 | orchestrator | registry.osism.tech/kolla/heat-api 2024.1 16b892c4d907 3 weeks ago 1.42GB 2025-02-19 09:26:55.231372 | orchestrator | registry.osism.tech/kolla/heat-api-cfn 2024.1 d0a8ac2a912f 3 weeks ago 1.42GB 2025-02-19 09:26:55.231406 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-19 09:26:55.278004 | orchestrator | ++ semver latest 5.0.0 2025-02-19 09:26:55.278199 | orchestrator | 2025-02-19 09:26:57.693794 | orchestrator | ## Containers @ testbed-node-1 2025-02-19 09:26:57.693920 | orchestrator | 2025-02-19 09:26:57.693941 | orchestrator | + [[ -1 -eq -1 ]] 2025-02-19 09:26:57.693957 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-19 09:26:57.693973 | orchestrator | + echo 2025-02-19 09:26:57.693988 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-02-19 09:26:57.694004 | orchestrator | + echo 2025-02-19 09:26:57.694076 | orchestrator | + osism container testbed-node-1 ps 2025-02-19 09:26:57.694112 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-19 09:26:57.694131 | orchestrator | 7609fcf8402b registry.osism.tech/kolla/octavia-worker:2024.1 "dumb-init --single-…" 2 minutes ago Up 2 minutes (healthy) octavia_worker 2025-02-19 09:26:57.694148 | orchestrator | e884eaa0ba00 registry.osism.tech/kolla/octavia-housekeeping:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) octavia_housekeeping 2025-02-19 09:26:57.694186 | orchestrator | 3be01f74cbbb registry.osism.tech/kolla/octavia-health-manager:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) octavia_health_manager 2025-02-19 09:26:57.694224 | orchestrator | 87aab056c778 registry.osism.tech/kolla/octavia-driver-agent:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes octavia_driver_agent 2025-02-19 09:26:57.694280 | orchestrator | eca623a0f904 registry.osism.tech/kolla/octavia-api:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) octavia_api 2025-02-19 09:26:57.694305 | orchestrator | 16aa231877eb registry.osism.tech/kolla/grafana:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-02-19 09:26:57.694331 | orchestrator | 5ae43f55ba2b registry.osism.tech/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-02-19 09:26:57.694355 | orchestrator | f0ff1e0e7040 registry.osism.tech/kolla/magnum-api:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-02-19 09:26:57.694382 | orchestrator | 7450696dc97a registry.osism.tech/kolla/placement-api:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api 2025-02-19 09:26:57.694450 | orchestrator | aeab39716d82 registry.osism.tech/kolla/designate-worker:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-02-19 09:26:57.694471 | orchestrator | e3ea555f6f4d registry.osism.tech/kolla/designate-mdns:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) 
designate_mdns 2025-02-19 09:26:57.694487 | orchestrator | 6e73b315747e registry.osism.tech/kolla/designate-producer:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-02-19 09:26:57.694503 | orchestrator | cfb181a21d94 registry.osism.tech/kolla/designate-central:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-02-19 09:26:57.694519 | orchestrator | 9e28362c33e9 registry.osism.tech/kolla/designate-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-02-19 09:26:57.694534 | orchestrator | 5e7ac9fe0f68 registry.osism.tech/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (unhealthy) ironic_neutron_agent 2025-02-19 09:26:57.694550 | orchestrator | 02d72629f8e8 registry.osism.tech/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_backend_bind9 2025-02-19 09:26:57.694565 | orchestrator | eb280764f85c registry.osism.tech/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-02-19 09:26:57.694581 | orchestrator | 25f0302b0e4e registry.osism.tech/kolla/nova-conductor:2024.1 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_conductor 2025-02-19 09:26:57.694597 | orchestrator | 07632beb66c0 registry.osism.tech/kolla/neutron-server:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-02-19 09:26:57.694613 | orchestrator | ba63a460a6b2 registry.osism.tech/kolla/nova-api:2024.1 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) nova_api 2025-02-19 09:26:57.694628 | orchestrator | b8b628b30b2a registry.osism.tech/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 12 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-02-19 09:26:57.694657 | orchestrator | 43b65372b8bc registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-02-19 09:26:57.694675 | orchestrator | 8fe448254144 registry.osism.tech/kolla/barbican-worker:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-02-19 09:26:57.694691 | orchestrator | 400158803abb registry.osism.tech/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-02-19 09:26:57.694716 | orchestrator | 07d860db2bb3 registry.osism.tech/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-02-19 09:26:57.694731 | orchestrator | c96275ab41e0 registry.osism.tech/kolla/barbican-api:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-02-19 09:26:57.694745 | orchestrator | b67cc95f1480 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-02-19 09:26:57.694760 | orchestrator | 482a8c470f2a registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-02-19 09:26:57.694774 | orchestrator | 060ffd6209aa registry.osism.tech/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-02-19 09:26:57.694788 | orchestrator | 28ad24e6f5c4 registry.osism.tech/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 16 minutes ago Up 
16 minutes (healthy) cinder_scheduler 2025-02-19 09:26:57.694802 | orchestrator | 1457f2568a52 registry.osism.tech/kolla/cinder-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2025-02-19 09:26:57.694816 | orchestrator | 4e48c2694de2 registry.osism.tech/kolla/keystone:2024.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-02-19 09:26:57.694831 | orchestrator | 683210721c02 registry.osism.tech/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-02-19 09:26:57.694849 | orchestrator | 243a217691ec registry.osism.tech/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-02-19 09:26:57.694864 | orchestrator | 34b37daae89f registry.osism.tech/kolla/horizon:2024.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (unhealthy) horizon 2025-02-19 09:26:57.694878 | orchestrator | e30e6752b6a4 registry.osism.tech/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2025-02-19 09:26:57.694893 | orchestrator | c901831f9f2f registry.osism.tech/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes mariadb_clustercheck 2025-02-19 09:26:57.694907 | orchestrator | 2e1b88b47033 registry.osism.tech/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch_dashboards 2025-02-19 09:26:57.694921 | orchestrator | a5ab39c70304 registry.osism.tech/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-1 2025-02-19 09:26:57.694935 | orchestrator | 2eb66fe09f4c registry.osism.tech/kolla/opensearch:2024.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch 2025-02-19 09:26:57.694949 | orchestrator | 709c60372067 registry.osism.tech/kolla/keepalived:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2025-02-19 09:26:57.694963 | orchestrator | 1e863b2e3407 registry.osism.tech/kolla/haproxy:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2025-02-19 09:26:57.694978 | orchestrator | 27420904e41f registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 29 minutes ago Up 29 minutes ceph-mgr-testbed-node-1 2025-02-19 09:26:57.695005 | orchestrator | c4ffe4aa565f registry.osism.tech/kolla/ovn-northd:2024.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_northd 2025-02-19 09:26:58.044468 | orchestrator | e3cd9b49c1a8 registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 31 minutes ago Up 31 minutes ceph-mon-testbed-node-1 2025-02-19 09:26:58.044584 | orchestrator | 679d00446e1f registry.osism.tech/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 31 minutes ago Up 30 minutes ovn_sb_db 2025-02-19 09:26:58.044604 | orchestrator | 7f4de72a4d45 registry.osism.tech/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 31 minutes ago Up 30 minutes ovn_nb_db 2025-02-19 09:26:58.044619 | orchestrator | 053f7990b019 registry.osism.tech/kolla/ovn-controller:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_controller 2025-02-19 09:26:58.044635 | orchestrator | bca4dd8f9e5c registry.osism.tech/kolla/rabbitmq:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) rabbitmq 2025-02-19 09:26:58.044653 | orchestrator | 4d85d741cb84 registry.osism.tech/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) openvswitch_vswitchd 
2025-02-19 09:26:58.044667 | orchestrator | 25a2fd08754e registry.osism.tech/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) openvswitch_db 2025-02-19 09:26:58.044681 | orchestrator | 60cfce8c627c registry.osism.tech/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) redis_sentinel 2025-02-19 09:26:58.044697 | orchestrator | 1094bdc940a3 registry.osism.tech/kolla/redis:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) redis 2025-02-19 09:26:58.044712 | orchestrator | 1de313dc1f27 registry.osism.tech/kolla/memcached:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) memcached 2025-02-19 09:26:58.044726 | orchestrator | 63199b803c69 registry.osism.tech/kolla/cron:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes cron 2025-02-19 09:26:58.044740 | orchestrator | 8770da8b4864 registry.osism.tech/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes kolla_toolbox 2025-02-19 09:26:58.044754 | orchestrator | f75459896a2b registry.osism.tech/kolla/fluentd:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes fluentd 2025-02-19 09:26:58.044785 | orchestrator | 2025-02-19 09:27:00.520397 | orchestrator | ## Images @ testbed-node-1 2025-02-19 09:27:00.520516 | orchestrator | 2025-02-19 09:27:00.520527 | orchestrator | + echo 2025-02-19 09:27:00.520534 | orchestrator | + echo '## Images @ testbed-node-1' 2025-02-19 09:27:00.520541 | orchestrator | + echo 2025-02-19 09:27:00.520548 | orchestrator | + osism container testbed-node-1 images 2025-02-19 09:27:00.520565 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-19 09:27:00.520574 | orchestrator | registry.osism.tech/osism/ceph-daemon quincy a767d52e1d4c 6 hours ago 1.94GB 2025-02-19 09:27:00.520580 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.1 3288e32ffa4c 8 hours ago 492MB 2025-02-19 09:27:00.520586 | orchestrator | registry.osism.tech/kolla/cron 2024.1 98ff1ec9784a 8 hours ago 387MB 2025-02-19 09:27:00.520592 | orchestrator | registry.osism.tech/kolla/haproxy 2024.1 5e4f85889bad 8 hours ago 399MB 2025-02-19 09:27:00.520598 | orchestrator | registry.osism.tech/kolla/opensearch 2024.1 98f10340a5df 8 hours ago 2.63GB 2025-02-19 09:27:00.520620 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.1 fd7d1be573b9 8 hours ago 2.25GB 2025-02-19 09:27:00.520635 | orchestrator | registry.osism.tech/kolla/memcached 2024.1 0f028ade7d7f 8 hours ago 387MB 2025-02-19 09:27:00.520641 | orchestrator | registry.osism.tech/kolla/fluentd 2024.1 07c7aabf23f8 8 hours ago 789MB 2025-02-19 09:27:00.520647 | orchestrator | registry.osism.tech/kolla/grafana 2024.1 03da0cffb400 8 hours ago 1.15GB 2025-02-19 09:27:00.520654 | orchestrator | registry.osism.tech/kolla/keepalived 2024.1 664de01dd0e2 8 hours ago 401MB 2025-02-19 09:27:00.520660 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.1 028abe724e22 8 hours ago 969MB 2025-02-19 09:27:00.520666 | orchestrator | registry.osism.tech/kolla/redis 2024.1 ae5277186fa1 8 hours ago 394MB 2025-02-19 09:27:00.520673 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.1 0c2e8a111244 8 hours ago 394MB 2025-02-19 09:27:00.520679 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.1 cbbaa43f92fa 8 hours ago 406MB 2025-02-19 09:27:00.520685 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.1 995e008ceaef 8 hours ago 406MB 2025-02-19 09:27:00.520691 | orchestrator | 
registry.osism.tech/kolla/prometheus-node-exporter 2024.1 0a5db6608f13 8 hours ago 446MB 2025-02-19 09:27:00.520697 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.1 ac6316ddfec3 8 hours ago 421MB 2025-02-19 09:27:00.520703 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.1 aea72b8425d6 8 hours ago 433MB 2025-02-19 09:27:00.520709 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.1 dc241803e050 8 hours ago 524MB 2025-02-19 09:27:00.520715 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.1 f7a2235b5450 8 hours ago 430MB 2025-02-19 09:27:00.520721 | orchestrator | registry.osism.tech/kolla/horizon 2024.1 e903d8408469 8 hours ago 1.61GB 2025-02-19 09:27:00.520727 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.1 46f457824fab 8 hours ago 635MB 2025-02-19 09:27:00.520733 | orchestrator | registry.osism.tech/kolla/mariadb-clustercheck 2024.1 a4af7cf39329 8 hours ago 428MB 2025-02-19 09:27:00.520739 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.1 d7b858e2d70d 8 hours ago 1.12GB 2025-02-19 09:27:00.520745 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.1 d81baf00774d 8 hours ago 1.12GB 2025-02-19 09:27:00.520751 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.1 2fb32dcd7c3c 8 hours ago 1.12GB 2025-02-19 09:27:00.520757 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.1 08208813418f 8 hours ago 1.12GB 2025-02-19 09:27:00.520763 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.1 c9fe3c166ca7 8 hours ago 1.51GB 2025-02-19 09:27:00.520770 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.1 abf32211b8c0 8 hours ago 1.66GB 2025-02-19 09:27:00.520776 | orchestrator | registry.osism.tech/kolla/designate-central 2024.1 5ebd0456d2af 8 hours ago 1.34GB 2025-02-19 09:27:00.520782 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.1 64660af52da9 8 hours ago 1.34GB 2025-02-19 09:27:00.520788 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.1 be4c57f33b38 8 hours ago 1.34GB 2025-02-19 09:27:00.520794 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.1 9b561a5decf5 8 hours ago 1.34GB 2025-02-19 09:27:00.520800 | orchestrator | registry.osism.tech/kolla/designate-api 2024.1 74da4ba7f935 8 hours ago 1.34GB 2025-02-19 09:27:00.520808 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.1 c8416100133d 8 hours ago 1.34GB 2025-02-19 09:27:00.520818 | orchestrator | registry.osism.tech/kolla/placement-api 2024.1 bb6f1994c4df 8 hours ago 1.32GB 2025-02-19 09:27:00.520824 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.1 71efd452af4c 8 hours ago 1.34GB 2025-02-19 09:27:00.520835 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.1 2832c15319e8 8 hours ago 1.34GB 2025-02-19 09:27:00.807391 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.1 409ed81b3a2b 8 hours ago 1.34GB 2025-02-19 09:27:00.807553 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.1 65f09a4358b6 8 hours ago 1.4GB 2025-02-19 09:27:00.807573 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.1 721a4e6aa105 8 hours ago 1.4GB 2025-02-19 09:27:00.807588 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.1 c369b1d1d48a 8 hours ago 1.4GB 2025-02-19 09:27:00.807603 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.1 4c8e3e184a31 8 hours ago 1.43GB 2025-02-19 09:27:00.807618 | 
orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.1 d82502cddd24 8 hours ago 1.43GB 2025-02-19 09:27:00.807632 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.1 c46ac482459a 8 hours ago 1.88GB 2025-02-19 09:27:00.807646 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.1 9ae5c89e30e8 8 hours ago 1.87GB 2025-02-19 09:27:00.807661 | orchestrator | registry.osism.tech/kolla/nova-api 2024.1 545ec5b33a2d 8 hours ago 1.64GB 2025-02-19 09:27:00.807675 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.1 8f50b91f58aa 8 hours ago 1.64GB 2025-02-19 09:27:00.807697 | orchestrator | registry.osism.tech/kolla/nova-compute-ironic 2024.1 a639d851ea15 8 hours ago 1.65GB 2025-02-19 09:27:00.807720 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.1 db577a27ec33 8 hours ago 1.64GB 2025-02-19 09:27:00.807765 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.1 ea979d965cfa 8 hours ago 1.79GB 2025-02-19 09:27:00.807791 | orchestrator | registry.osism.tech/kolla/ironic-neutron-agent 2024.1 e9c79d08e8e2 8 hours ago 1.59GB 2025-02-19 09:27:00.807813 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.1 e9c7267bfa54 8 hours ago 1.6GB 2025-02-19 09:27:00.807828 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.1 a5db53dba604 8 hours ago 1.4GB 2025-02-19 09:27:00.807842 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.1 e012f71898cd 8 hours ago 1.4GB 2025-02-19 09:27:00.807857 | orchestrator | registry.osism.tech/kolla/keystone 2024.1 eeb97e5dfe72 8 hours ago 1.44GB 2025-02-19 09:27:00.807889 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-02-19 09:27:00.859567 | orchestrator | ++ semver latest 5.0.0 2025-02-19 09:27:00.859691 | orchestrator | + [[ -1 -eq -1 ]] 2025-02-19 09:27:03.274602 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-19 09:27:03.274700 | orchestrator | 2025-02-19 09:27:03.274713 | orchestrator | ## Containers @ testbed-node-2 2025-02-19 09:27:03.274724 | orchestrator | 2025-02-19 09:27:03.274735 | orchestrator | + echo 2025-02-19 09:27:03.274746 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-02-19 09:27:03.274758 | orchestrator | + echo 2025-02-19 09:27:03.274769 | orchestrator | + osism container testbed-node-2 ps 2025-02-19 09:27:03.274794 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-02-19 09:27:03.274808 | orchestrator | 65fe3024122a registry.osism.tech/kolla/octavia-worker:2024.1 "dumb-init --single-…" 3 minutes ago Up 2 minutes (healthy) octavia_worker 2025-02-19 09:27:03.274830 | orchestrator | a9298912dd8c registry.osism.tech/kolla/octavia-housekeeping:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) octavia_housekeeping 2025-02-19 09:27:03.274861 | orchestrator | 478aee4eb95b registry.osism.tech/kolla/octavia-health-manager:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) octavia_health_manager 2025-02-19 09:27:03.274872 | orchestrator | 3679a691237f registry.osism.tech/kolla/octavia-driver-agent:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes octavia_driver_agent 2025-02-19 09:27:03.274883 | orchestrator | 30b57bc93463 registry.osism.tech/kolla/octavia-api:2024.1 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) octavia_api 2025-02-19 09:27:03.274894 | orchestrator | 92f0a8c5c5ce registry.osism.tech/kolla/grafana:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-02-19 09:27:03.274904 | orchestrator 
| 5e14fb9124fe registry.osism.tech/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-02-19 09:27:03.274915 | orchestrator | a1b6df63a45a registry.osism.tech/kolla/magnum-api:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-02-19 09:27:03.274928 | orchestrator | f2171518a2ca registry.osism.tech/kolla/placement-api:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api 2025-02-19 09:27:03.274938 | orchestrator | f33e99b8b074 registry.osism.tech/kolla/designate-worker:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-02-19 09:27:03.274948 | orchestrator | 3b13183f7a66 registry.osism.tech/kolla/designate-mdns:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-02-19 09:27:03.274959 | orchestrator | 8f6174d26f16 registry.osism.tech/kolla/designate-producer:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-02-19 09:27:03.274970 | orchestrator | 60cd8821f5e7 registry.osism.tech/kolla/designate-central:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-02-19 09:27:03.274980 | orchestrator | c37859ec7983 registry.osism.tech/kolla/designate-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-02-19 09:27:03.274991 | orchestrator | d6b4adb69056 registry.osism.tech/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (unhealthy) ironic_neutron_agent 2025-02-19 09:27:03.275001 | orchestrator | 45e9f711b131 registry.osism.tech/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-02-19 09:27:03.275011 | orchestrator | 7bab7bb7cabf registry.osism.tech/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-02-19 09:27:03.275022 | orchestrator | 9bc0eb1705f1 registry.osism.tech/kolla/nova-conductor:2024.1 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_conductor 2025-02-19 09:27:03.275032 | orchestrator | 0bf13f43f073 registry.osism.tech/kolla/neutron-server:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2025-02-19 09:27:03.275043 | orchestrator | fec135bba7f9 registry.osism.tech/kolla/nova-api:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-02-19 09:27:03.275053 | orchestrator | decc210a31fd registry.osism.tech/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 12 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-02-19 09:27:03.275072 | orchestrator | ed24343ca776 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-02-19 09:27:03.275092 | orchestrator | 1a763644ccae registry.osism.tech/kolla/barbican-worker:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-02-19 09:27:03.275103 | orchestrator | bc6fa15dbf78 registry.osism.tech/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 13 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-02-19 09:27:03.275114 | orchestrator | c55c15ae5029 registry.osism.tech/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-02-19 09:27:03.275125 | orchestrator | f6ede92a3652 
registry.osism.tech/kolla/barbican-api:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2025-02-19 09:27:03.275135 | orchestrator | 41030b4f6f47 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-02-19 09:27:03.275146 | orchestrator | 5b7d7dd5a934 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-02-19 09:27:03.275157 | orchestrator | a0fb45e16f4e registry.osism.tech/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-02-19 09:27:03.275169 | orchestrator | fa0205de90f0 registry.osism.tech/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_scheduler 2025-02-19 09:27:03.275181 | orchestrator | 4a38746383b6 registry.osism.tech/kolla/cinder-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) cinder_api 2025-02-19 09:27:03.275193 | orchestrator | d24dc438f31d registry.osism.tech/kolla/keystone:2024.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-02-19 09:27:03.275205 | orchestrator | 16d83fec3b02 registry.osism.tech/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2025-02-19 09:27:03.275217 | orchestrator | 77f01df53575 registry.osism.tech/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2025-02-19 09:27:03.275229 | orchestrator | 78ccca2f11ee registry.osism.tech/kolla/horizon:2024.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (unhealthy) horizon 2025-02-19 09:27:03.275240 | orchestrator | 8ef5af00525f registry.osism.tech/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2025-02-19 09:27:03.275253 | orchestrator | 5424655b9e3c registry.osism.tech/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes mariadb_clustercheck 2025-02-19 09:27:03.275265 | orchestrator | 6a2a7335b2bf registry.osism.tech/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch_dashboards 2025-02-19 09:27:03.275276 | orchestrator | 8335164c432d registry.osism.tech/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2025-02-19 09:27:03.275288 | orchestrator | 65d13c2612ed registry.osism.tech/kolla/opensearch:2024.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) opensearch 2025-02-19 09:27:03.275300 | orchestrator | 358eca35993b registry.osism.tech/kolla/keepalived:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes keepalived 2025-02-19 09:27:03.275317 | orchestrator | b83156ce894c registry.osism.tech/kolla/haproxy:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) haproxy 2025-02-19 09:27:03.275329 | orchestrator | 51f76217708f registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 29 minutes ago Up 29 minutes ceph-mgr-testbed-node-2 2025-02-19 09:27:03.275345 | orchestrator | 15bbdd8ce3da registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-2 2025-02-19 09:27:03.559825 | orchestrator | 73d011a27017 registry.osism.tech/kolla/ovn-northd:2024.1 "dumb-init --single-…" 31 minutes ago Up 30 minutes ovn_northd 2025-02-19 
09:27:03.559977 | orchestrator | 68c724a4021b registry.osism.tech/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 31 minutes ago Up 30 minutes ovn_sb_db 2025-02-19 09:27:03.560010 | orchestrator | f3189575dce8 registry.osism.tech/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 31 minutes ago Up 30 minutes ovn_nb_db 2025-02-19 09:27:03.560037 | orchestrator | ab4432d2758d registry.osism.tech/kolla/rabbitmq:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) rabbitmq 2025-02-19 09:27:03.560076 | orchestrator | 28de6352d727 registry.osism.tech/kolla/ovn-controller:2024.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes ovn_controller 2025-02-19 09:27:03.560101 | orchestrator | a3374f75eeea registry.osism.tech/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) openvswitch_vswitchd 2025-02-19 09:27:03.560125 | orchestrator | 8e9888e7d788 registry.osism.tech/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) openvswitch_db 2025-02-19 09:27:03.560147 | orchestrator | a1c2d4881f04 registry.osism.tech/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) redis_sentinel 2025-02-19 09:27:03.560175 | orchestrator | aa40cc217f71 registry.osism.tech/kolla/redis:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) redis 2025-02-19 09:27:03.560201 | orchestrator | f7227b044d80 registry.osism.tech/kolla/memcached:2024.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) memcached 2025-02-19 09:27:03.560224 | orchestrator | dfb794a23d8f registry.osism.tech/kolla/cron:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes cron 2025-02-19 09:27:03.560247 | orchestrator | 0fffa15be1b3 registry.osism.tech/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes kolla_toolbox 2025-02-19 09:27:03.560271 | orchestrator | 74b3703affe2 registry.osism.tech/kolla/fluentd:2024.1 "dumb-init --single-…" 35 minutes ago Up 35 minutes fluentd 2025-02-19 09:27:03.560317 | orchestrator | 2025-02-19 09:27:05.892850 | orchestrator | ## Images @ testbed-node-2 2025-02-19 09:27:05.892981 | orchestrator | 2025-02-19 09:27:05.893004 | orchestrator | + echo 2025-02-19 09:27:05.893028 | orchestrator | + echo '## Images @ testbed-node-2' 2025-02-19 09:27:05.893045 | orchestrator | + echo 2025-02-19 09:27:05.893059 | orchestrator | + osism container testbed-node-2 images 2025-02-19 09:27:05.893095 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-02-19 09:27:05.893113 | orchestrator | registry.osism.tech/osism/ceph-daemon quincy a767d52e1d4c 6 hours ago 1.94GB 2025-02-19 09:27:05.893127 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.1 3288e32ffa4c 8 hours ago 492MB 2025-02-19 09:27:05.893166 | orchestrator | registry.osism.tech/kolla/cron 2024.1 98ff1ec9784a 8 hours ago 387MB 2025-02-19 09:27:05.893181 | orchestrator | registry.osism.tech/kolla/haproxy 2024.1 5e4f85889bad 8 hours ago 399MB 2025-02-19 09:27:05.893195 | orchestrator | registry.osism.tech/kolla/opensearch 2024.1 98f10340a5df 8 hours ago 2.63GB 2025-02-19 09:27:05.893209 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.1 fd7d1be573b9 8 hours ago 2.25GB 2025-02-19 09:27:05.893235 | orchestrator | registry.osism.tech/kolla/memcached 2024.1 0f028ade7d7f 8 hours ago 387MB 2025-02-19 09:27:05.893251 | orchestrator | registry.osism.tech/kolla/fluentd 2024.1 07c7aabf23f8 8 hours ago 789MB 2025-02-19 09:27:05.893274 | orchestrator | 
registry.osism.tech/kolla/grafana 2024.1 03da0cffb400 8 hours ago 1.15GB 2025-02-19 09:27:05.893297 | orchestrator | registry.osism.tech/kolla/keepalived 2024.1 664de01dd0e2 8 hours ago 401MB 2025-02-19 09:27:05.893322 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.1 028abe724e22 8 hours ago 969MB 2025-02-19 09:27:05.893347 | orchestrator | registry.osism.tech/kolla/redis 2024.1 ae5277186fa1 8 hours ago 394MB 2025-02-19 09:27:05.893371 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.1 0c2e8a111244 8 hours ago 394MB 2025-02-19 09:27:05.893394 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.1 cbbaa43f92fa 8 hours ago 406MB 2025-02-19 09:27:05.893410 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.1 995e008ceaef 8 hours ago 406MB 2025-02-19 09:27:05.893461 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.1 0a5db6608f13 8 hours ago 446MB 2025-02-19 09:27:05.893477 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.1 ac6316ddfec3 8 hours ago 421MB 2025-02-19 09:27:05.893491 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.1 aea72b8425d6 8 hours ago 433MB 2025-02-19 09:27:05.893505 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.1 dc241803e050 8 hours ago 524MB 2025-02-19 09:27:05.893520 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.1 f7a2235b5450 8 hours ago 430MB 2025-02-19 09:27:05.893534 | orchestrator | registry.osism.tech/kolla/horizon 2024.1 e903d8408469 8 hours ago 1.61GB 2025-02-19 09:27:05.893548 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.1 46f457824fab 8 hours ago 635MB 2025-02-19 09:27:05.893563 | orchestrator | registry.osism.tech/kolla/mariadb-clustercheck 2024.1 a4af7cf39329 8 hours ago 428MB 2025-02-19 09:27:05.893577 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.1 d7b858e2d70d 8 hours ago 1.12GB 2025-02-19 09:27:05.893592 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.1 d81baf00774d 8 hours ago 1.12GB 2025-02-19 09:27:05.893606 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.1 2fb32dcd7c3c 8 hours ago 1.12GB 2025-02-19 09:27:05.893620 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.1 08208813418f 8 hours ago 1.12GB 2025-02-19 09:27:05.893634 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.1 c9fe3c166ca7 8 hours ago 1.51GB 2025-02-19 09:27:05.893648 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.1 abf32211b8c0 8 hours ago 1.66GB 2025-02-19 09:27:05.893662 | orchestrator | registry.osism.tech/kolla/designate-central 2024.1 5ebd0456d2af 8 hours ago 1.34GB 2025-02-19 09:27:05.893676 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.1 64660af52da9 8 hours ago 1.34GB 2025-02-19 09:27:05.893690 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.1 be4c57f33b38 8 hours ago 1.34GB 2025-02-19 09:27:05.893713 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.1 9b561a5decf5 8 hours ago 1.34GB 2025-02-19 09:27:05.893727 | orchestrator | registry.osism.tech/kolla/designate-api 2024.1 74da4ba7f935 8 hours ago 1.34GB 2025-02-19 09:27:05.893746 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.1 c8416100133d 8 hours ago 1.34GB 2025-02-19 09:27:05.893760 | orchestrator | registry.osism.tech/kolla/placement-api 2024.1 bb6f1994c4df 8 hours ago 1.32GB 2025-02-19 09:27:05.893775 | orchestrator | 
registry.osism.tech/kolla/barbican-worker 2024.1 71efd452af4c 8 hours ago 1.34GB 2025-02-19 09:27:05.893802 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.1 2832c15319e8 8 hours ago 1.34GB 2025-02-19 09:27:06.170898 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.1 409ed81b3a2b 8 hours ago 1.34GB 2025-02-19 09:27:06.171024 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.1 65f09a4358b6 8 hours ago 1.4GB 2025-02-19 09:27:06.171057 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.1 721a4e6aa105 8 hours ago 1.4GB 2025-02-19 09:27:06.171084 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.1 c369b1d1d48a 8 hours ago 1.4GB 2025-02-19 09:27:06.171111 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.1 4c8e3e184a31 8 hours ago 1.43GB 2025-02-19 09:27:06.171135 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.1 d82502cddd24 8 hours ago 1.43GB 2025-02-19 09:27:06.171160 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.1 c46ac482459a 8 hours ago 1.88GB 2025-02-19 09:27:06.171184 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.1 9ae5c89e30e8 8 hours ago 1.87GB 2025-02-19 09:27:06.171208 | orchestrator | registry.osism.tech/kolla/nova-api 2024.1 545ec5b33a2d 8 hours ago 1.64GB 2025-02-19 09:27:06.171232 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.1 8f50b91f58aa 8 hours ago 1.64GB 2025-02-19 09:27:06.171257 | orchestrator | registry.osism.tech/kolla/nova-compute-ironic 2024.1 a639d851ea15 8 hours ago 1.65GB 2025-02-19 09:27:06.171287 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.1 db577a27ec33 8 hours ago 1.64GB 2025-02-19 09:27:06.171352 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.1 ea979d965cfa 8 hours ago 1.79GB 2025-02-19 09:27:06.171376 | orchestrator | registry.osism.tech/kolla/ironic-neutron-agent 2024.1 e9c79d08e8e2 8 hours ago 1.59GB 2025-02-19 09:27:06.171404 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.1 e9c7267bfa54 8 hours ago 1.6GB 2025-02-19 09:27:06.171420 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.1 a5db53dba604 8 hours ago 1.4GB 2025-02-19 09:27:06.171464 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.1 e012f71898cd 8 hours ago 1.4GB 2025-02-19 09:27:06.171479 | orchestrator | registry.osism.tech/kolla/keystone 2024.1 eeb97e5dfe72 8 hours ago 1.44GB 2025-02-19 09:27:06.171512 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-02-19 09:27:06.177486 | orchestrator | + set -e 2025-02-19 09:27:06.179017 | orchestrator | + source /opt/manager-vars.sh 2025-02-19 09:27:06.179087 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-19 09:27:06.188994 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-19 09:27:06.189048 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-19 09:27:06.189064 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-19 09:27:06.189083 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-19 09:27:06.189099 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-19 09:27:06.189114 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-19 09:27:06.189143 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-19 09:27:06.189158 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-19 09:27:06.189208 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-19 09:27:06.189223 | orchestrator | ++ export ARA=false 2025-02-19 09:27:06.189237 | orchestrator | ++ ARA=false 2025-02-19 
09:27:06.189252 | orchestrator | ++ export TEMPEST=false 2025-02-19 09:27:06.189266 | orchestrator | ++ TEMPEST=false 2025-02-19 09:27:06.189280 | orchestrator | ++ export IS_ZUUL=true 2025-02-19 09:27:06.189295 | orchestrator | ++ IS_ZUUL=true 2025-02-19 09:27:06.189309 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-02-19 09:27:06.189323 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.77 2025-02-19 09:27:06.189337 | orchestrator | ++ export EXTERNAL_API=false 2025-02-19 09:27:06.189352 | orchestrator | ++ EXTERNAL_API=false 2025-02-19 09:27:06.189366 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-19 09:27:06.189380 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-19 09:27:06.189394 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-19 09:27:06.189408 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-19 09:27:06.189462 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-19 09:27:06.189495 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-19 09:27:06.189510 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-19 09:27:06.189524 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-02-19 09:27:06.189549 | orchestrator | + set -e 2025-02-19 09:27:06.190529 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-19 09:27:06.190557 | orchestrator | ++ export INTERACTIVE=false 2025-02-19 09:27:06.190573 | orchestrator | ++ INTERACTIVE=false 2025-02-19 09:27:06.190589 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-19 09:27:06.190604 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-19 09:27:06.190619 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-02-19 09:27:06.190641 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-02-19 09:27:06.234815 | orchestrator | 2025-02-19 09:27:06.236044 | orchestrator | # Ceph status 2025-02-19 09:27:06.236105 | orchestrator | 2025-02-19 09:27:06.236131 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-19 09:27:06.236178 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-19 09:27:06.236201 | orchestrator | + echo 2025-02-19 09:27:06.236225 | orchestrator | + echo '# Ceph status' 2025-02-19 09:27:06.236250 | orchestrator | + echo 2025-02-19 09:27:06.236275 | orchestrator | + ceph -s 2025-02-19 09:27:06.236311 | orchestrator | /opt/configuration/scripts/check/100-ceph-with-ansible.sh: line 12: ceph: command not found 2025-02-19 09:27:06.689415 | orchestrator | ERROR 2025-02-19 09:27:06.689885 | orchestrator | { 2025-02-19 09:27:06.689991 | orchestrator | "delta": "0:00:22.796857", 2025-02-19 09:27:06.690064 | orchestrator | "end": "2025-02-19 09:27:06.242102", 2025-02-19 09:27:06.690131 | orchestrator | "msg": "non-zero return code", 2025-02-19 09:27:06.690193 | orchestrator | "rc": 127, 2025-02-19 09:27:06.690248 | orchestrator | "start": "2025-02-19 09:26:43.445245" 2025-02-19 09:27:06.690303 | orchestrator | } failure 2025-02-19 09:27:06.708582 | 2025-02-19 09:27:06.708700 | PLAY RECAP 2025-02-19 09:27:06.708808 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-02-19 09:27:06.708847 | 2025-02-19 09:27:06.948611 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-02-19 09:27:06.953860 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-02-19 09:27:07.654144 | 2025-02-19 09:27:07.654318 | PLAY [Post 
output play] 2025-02-19 09:27:07.683890 | 2025-02-19 09:27:07.684050 | LOOP [stage-output : Register sources] 2025-02-19 09:27:07.770449 | 2025-02-19 09:27:07.770833 | TASK [stage-output : Check sudo] 2025-02-19 09:27:08.524239 | orchestrator | sudo: a password is required 2025-02-19 09:27:08.827068 | orchestrator | ok: Runtime: 0:00:00.018175 2025-02-19 09:27:08.844862 | 2025-02-19 09:27:08.845002 | LOOP [stage-output : Set source and destination for files and folders] 2025-02-19 09:27:08.896219 | 2025-02-19 09:27:08.896540 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-02-19 09:27:08.988680 | orchestrator | ok 2025-02-19 09:27:08.999765 | 2025-02-19 09:27:08.999906 | LOOP [stage-output : Ensure target folders exist] 2025-02-19 09:27:09.453930 | orchestrator | ok: "docs" 2025-02-19 09:27:09.454311 | 2025-02-19 09:27:09.698552 | orchestrator | ok: "artifacts" 2025-02-19 09:27:09.927390 | orchestrator | ok: "logs" 2025-02-19 09:27:09.953341 | 2025-02-19 09:27:09.953533 | LOOP [stage-output : Copy files and folders to staging folder] 2025-02-19 09:27:09.994413 | 2025-02-19 09:27:09.994682 | TASK [stage-output : Make all log files readable] 2025-02-19 09:27:10.322997 | orchestrator | ok 2025-02-19 09:27:10.333870 | 2025-02-19 09:27:10.334010 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-02-19 09:27:10.379467 | orchestrator | skipping: Conditional result was False 2025-02-19 09:27:10.395260 | 2025-02-19 09:27:10.395419 | TASK [stage-output : Discover log files for compression] 2025-02-19 09:27:10.421200 | orchestrator | skipping: Conditional result was False 2025-02-19 09:27:10.435934 | 2025-02-19 09:27:10.436059 | LOOP [stage-output : Archive everything from logs] 2025-02-19 09:27:10.523186 | 2025-02-19 09:27:10.523339 | PLAY [Post cleanup play] 2025-02-19 09:27:10.547365 | 2025-02-19 09:27:10.547490 | TASK [Set cloud fact (Zuul deployment)] 2025-02-19 09:27:10.618956 | orchestrator | ok 2025-02-19 09:27:10.630885 | 2025-02-19 09:27:10.631010 | TASK [Set cloud fact (local deployment)] 2025-02-19 09:27:10.666490 | orchestrator | skipping: Conditional result was False 2025-02-19 09:27:10.687135 | 2025-02-19 09:27:10.687268 | TASK [Clean the cloud environment] 2025-02-19 09:27:11.362127 | orchestrator | 2025-02-19 09:27:11 - clean up servers 2025-02-19 09:27:12.212794 | orchestrator | 2025-02-19 09:27:12 - testbed-manager 2025-02-19 09:27:12.303403 | orchestrator | 2025-02-19 09:27:12 - testbed-node-5 2025-02-19 09:27:12.401587 | orchestrator | 2025-02-19 09:27:12 - testbed-node-4 2025-02-19 09:27:12.492107 | orchestrator | 2025-02-19 09:27:12 - testbed-node-3 2025-02-19 09:27:12.607835 | orchestrator | 2025-02-19 09:27:12 - testbed-node-0 2025-02-19 09:27:12.730851 | orchestrator | 2025-02-19 09:27:12 - testbed-node-1 2025-02-19 09:27:12.828687 | orchestrator | 2025-02-19 09:27:12 - testbed-node-2 2025-02-19 09:27:12.935336 | orchestrator | 2025-02-19 09:27:12 - clean up keypairs 2025-02-19 09:27:12.955174 | orchestrator | 2025-02-19 09:27:12 - testbed 2025-02-19 09:27:12.981592 | orchestrator | 2025-02-19 09:27:12 - wait for servers to be gone 2025-02-19 09:27:24.213061 | orchestrator | 2025-02-19 09:27:24 - clean up ports 2025-02-19 09:27:24.434251 | orchestrator | 2025-02-19 09:27:24 - 19761e93-447a-4d24-bf37-bdfeb7b4d25b 2025-02-19 09:27:24.689979 | orchestrator | 2025-02-19 09:27:24 - 57ae0d35-d09a-4ff4-8d67-47b12cab3fd9 2025-02-19 09:27:24.938110 | orchestrator | 2025-02-19 09:27:24 - 7ef4ef32-4f6f-4cf6-8aeb-f4ef199a9a06 2025-02-19 
09:27:25.157370 | orchestrator | 2025-02-19 09:27:25 - 9e45572f-f065-44e7-9db5-6b5d239e3c1e 2025-02-19 09:27:25.513968 | orchestrator | 2025-02-19 09:27:25 - dc372622-e2f7-4bb8-917d-acee4137ac73 2025-02-19 09:27:25.755939 | orchestrator | 2025-02-19 09:27:25 - f6ac2a28-a206-47e3-a850-5bd9b390209a 2025-02-19 09:27:25.970997 | orchestrator | 2025-02-19 09:27:25 - f925c5d2-dcc7-43b6-aced-6094d884843a 2025-02-19 09:27:26.160762 | orchestrator | 2025-02-19 09:27:26 - clean up volumes 2025-02-19 09:27:26.293754 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-2-node-base 2025-02-19 09:27:26.329870 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-5-node-base 2025-02-19 09:27:26.375517 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-3-node-base 2025-02-19 09:27:26.417096 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-0-node-base 2025-02-19 09:27:26.457522 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-4-node-base 2025-02-19 09:27:26.503858 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-1-node-base 2025-02-19 09:27:26.542609 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-5-node-5 2025-02-19 09:27:26.585583 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-7-node-1 2025-02-19 09:27:26.627871 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-9-node-3 2025-02-19 09:27:26.667018 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-1-node-1 2025-02-19 09:27:26.706924 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-16-node-4 2025-02-19 09:27:26.755489 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-manager-base 2025-02-19 09:27:26.800868 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-4-node-4 2025-02-19 09:27:26.844444 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-0-node-0 2025-02-19 09:27:26.885216 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-11-node-5 2025-02-19 09:27:26.925021 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-15-node-3 2025-02-19 09:27:26.967123 | orchestrator | 2025-02-19 09:27:26 - testbed-volume-17-node-5 2025-02-19 09:27:27.011494 | orchestrator | 2025-02-19 09:27:27 - testbed-volume-10-node-4 2025-02-19 09:27:27.056784 | orchestrator | 2025-02-19 09:27:27 - testbed-volume-6-node-0 2025-02-19 09:27:27.095108 | orchestrator | 2025-02-19 09:27:27 - testbed-volume-3-node-3 2025-02-19 09:27:27.137225 | orchestrator | 2025-02-19 09:27:27 - testbed-volume-12-node-0 2025-02-19 09:27:27.176359 | orchestrator | 2025-02-19 09:27:27 - testbed-volume-2-node-2 2025-02-19 09:27:27.218378 | orchestrator | 2025-02-19 09:27:27 - testbed-volume-8-node-2 2025-02-19 09:27:27.262469 | orchestrator | 2025-02-19 09:27:27 - testbed-volume-14-node-2 2025-02-19 09:27:27.309062 | orchestrator | 2025-02-19 09:27:27 - testbed-volume-13-node-1 2025-02-19 09:27:27.349534 | orchestrator | 2025-02-19 09:27:27 - disconnect routers 2025-02-19 09:27:27.418341 | orchestrator | 2025-02-19 09:27:27 - testbed 2025-02-19 09:27:28.206584 | orchestrator | 2025-02-19 09:27:28 - clean up subnets 2025-02-19 09:27:28.240282 | orchestrator | 2025-02-19 09:27:28 - subnet-testbed-management 2025-02-19 09:27:28.379214 | orchestrator | 2025-02-19 09:27:28 - clean up networks 2025-02-19 09:27:28.566546 | orchestrator | 2025-02-19 09:27:28 - net-testbed-management 2025-02-19 09:27:28.825332 | orchestrator | 2025-02-19 09:27:28 - clean up security groups 2025-02-19 09:27:28.855833 | orchestrator | 2025-02-19 09:27:28 - testbed-management 2025-02-19 09:27:28.938985 | orchestrator | 2025-02-19 09:27:28 - testbed-node 2025-02-19 09:27:29.016562 | 
orchestrator | 2025-02-19 09:27:29 - clean up floating ips 2025-02-19 09:27:29.042231 | orchestrator | 2025-02-19 09:27:29 - 81.163.192.77 2025-02-19 09:27:29.407782 | orchestrator | 2025-02-19 09:27:29 - clean up routers 2025-02-19 09:27:29.451371 | orchestrator | 2025-02-19 09:27:29 - testbed 2025-02-19 09:27:30.293318 | orchestrator | changed 2025-02-19 09:27:30.338211 | 2025-02-19 09:27:30.338353 | PLAY RECAP 2025-02-19 09:27:30.338439 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-02-19 09:27:30.338481 | 2025-02-19 09:27:30.451263 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-02-19 09:27:30.458991 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-02-19 09:27:31.160002 | 2025-02-19 09:27:31.160172 | PLAY [Base post-fetch] 2025-02-19 09:27:31.190332 | 2025-02-19 09:27:31.190486 | TASK [fetch-output : Set log path for multiple nodes] 2025-02-19 09:27:31.257384 | orchestrator | skipping: Conditional result was False 2025-02-19 09:27:31.274302 | 2025-02-19 09:27:31.274484 | TASK [fetch-output : Set log path for single node] 2025-02-19 09:27:31.320919 | orchestrator | ok 2025-02-19 09:27:31.330392 | 2025-02-19 09:27:31.330521 | LOOP [fetch-output : Ensure local output dirs] 2025-02-19 09:27:31.801112 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1b46a84af940463697c0e33a75af0ed4/work/logs" 2025-02-19 09:27:32.091122 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1b46a84af940463697c0e33a75af0ed4/work/artifacts" 2025-02-19 09:27:32.389434 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1b46a84af940463697c0e33a75af0ed4/work/docs" 2025-02-19 09:27:32.411173 | 2025-02-19 09:27:32.411303 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-02-19 09:27:33.244663 | orchestrator | changed: .d..t...... ./ 2025-02-19 09:27:33.245009 | orchestrator | changed: All items complete 2025-02-19 09:27:33.245059 | 2025-02-19 09:27:33.868001 | orchestrator | changed: .d..t...... ./ 2025-02-19 09:27:34.444596 | orchestrator | changed: .d..t...... 
./ 2025-02-19 09:27:34.476245 | 2025-02-19 09:27:34.476463 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-02-19 09:27:34.528469 | orchestrator | skipping: Conditional result was False 2025-02-19 09:27:34.535367 | orchestrator | skipping: Conditional result was False 2025-02-19 09:27:34.596985 | 2025-02-19 09:27:34.597210 | PLAY RECAP 2025-02-19 09:27:34.597361 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-02-19 09:27:34.597443 | 2025-02-19 09:27:34.718289 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-02-19 09:27:34.721702 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-02-19 09:27:35.408882 | 2025-02-19 09:27:35.409037 | PLAY [Base post] 2025-02-19 09:27:35.437706 | 2025-02-19 09:27:35.437846 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-02-19 09:27:36.531300 | orchestrator | changed 2025-02-19 09:27:36.572239 | 2025-02-19 09:27:36.572365 | PLAY RECAP 2025-02-19 09:27:36.572457 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-02-19 09:27:36.572521 | 2025-02-19 09:27:36.685311 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-02-19 09:27:36.693232 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-02-19 09:27:37.449480 | 2025-02-19 09:27:37.449640 | PLAY [Base post-logs] 2025-02-19 09:27:37.466098 | 2025-02-19 09:27:37.466221 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-02-19 09:27:37.931587 | localhost | changed 2025-02-19 09:27:37.938928 | 2025-02-19 09:27:37.939129 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-02-19 09:27:37.980224 | localhost | ok 2025-02-19 09:27:37.988790 | 2025-02-19 09:27:37.988919 | TASK [Set zuul-log-path fact] 2025-02-19 09:27:38.007655 | localhost | ok 2025-02-19 09:27:38.022412 | 2025-02-19 09:27:38.022528 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-02-19 09:27:38.068667 | localhost | skipping: Conditional result was False 2025-02-19 09:27:38.074365 | 2025-02-19 09:27:38.074530 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-02-19 09:27:38.115189 | localhost | ok 2025-02-19 09:27:38.120205 | 2025-02-19 09:27:38.120357 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-02-19 09:27:38.158278 | localhost | skipping: Conditional result was False 2025-02-19 09:27:38.167325 | 2025-02-19 09:27:38.167543 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-02-19 09:27:38.185622 | localhost | skipping: Conditional result was False 2025-02-19 09:27:38.191100 | 2025-02-19 09:27:38.191255 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-02-19 09:27:38.216747 | localhost | skipping: Conditional result was False 2025-02-19 09:27:38.223695 | 2025-02-19 09:27:38.223871 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-02-19 09:27:38.249914 | localhost | skipping: Conditional result was False 2025-02-19 09:27:38.260691 | 2025-02-19 09:27:38.260897 | TASK [upload-logs : Create log directories] 2025-02-19 09:27:38.795610 | localhost | changed 2025-02-19 09:27:38.803634 | 2025-02-19 09:27:38.803812 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-02-19 09:27:39.345371 | localhost -> localhost | ok: Runtime: 0:00:00.007322 2025-02-19 
09:27:39.357648 | 2025-02-19 09:27:39.357863 | TASK [upload-logs : Upload logs to log server] 2025-02-19 09:27:39.945680 | localhost | Output suppressed because no_log was given 2025-02-19 09:27:39.951666 | 2025-02-19 09:27:39.951916 | LOOP [upload-logs : Compress console log and json output] 2025-02-19 09:27:40.017791 | localhost | skipping: Conditional result was False 2025-02-19 09:27:40.034666 | localhost | skipping: Conditional result was False 2025-02-19 09:27:40.049335 | 2025-02-19 09:27:40.049540 | LOOP [upload-logs : Upload compressed console log and json output] 2025-02-19 09:27:40.124053 | localhost | skipping: Conditional result was False 2025-02-19 09:27:40.124791 | 2025-02-19 09:27:40.135018 | localhost | skipping: Conditional result was False 2025-02-19 09:27:40.146495 | 2025-02-19 09:27:40.146693 | LOOP [upload-logs : Upload console log and json output]
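The per-node container and image listings above are produced by a small shell loop on the manager. The node list and the `osism container <node> ps` / `osism container <node> images` calls appear verbatim in the xtrace; the sketch below drops the `semver $MANAGER_VERSION 5.0.0` branch that the trace shows around those calls (presumably selecting a different command form for older manager versions), so it is a simplified reconstruction rather than the script the job actually runs:

```bash
#!/usr/bin/env bash
# Simplified reconstruction of the per-node inspection step seen in the log;
# the real script also gates the command form on "semver $MANAGER_VERSION 5.0.0".
set -e
source /opt/manager-vars.sh   # NUMBER_OF_NODES, MANAGER_VERSION, OPENSTACK_VERSION, ...

for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
    echo
    echo "## Containers @ ${node}"
    echo
    osism container "${node}" ps

    echo
    echo "## Images @ ${node}"
    echo
    osism container "${node}" images
done
```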
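The deploy playbook then fails in /opt/configuration/scripts/check/100-ceph-with-ansible.sh: `ceph -s` is executed on a host whose PATH has no `ceph` client, bash returns 127 ("ceph: command not found"), and `set -e` turns that into the failed task in the PLAY RECAP. A minimal sketch of the traced portion of that check script, reconstructed from the `+`/`++` xtrace lines above; the real script in the osism/testbed configuration repository may differ in detail:

```bash
#!/usr/bin/env bash
# Sketch reconstructed from the xtrace output above, not the verbatim
# upstream script.
set -e

# Helper variables and manager version detection, as traced.
source /opt/configuration/scripts/include.sh
source /opt/configuration/scripts/manager-version.sh

echo
echo '# Ceph status'
echo

# This is the call that fails in the log: the ceph CLI is not installed on
# the node executing the check, so bash exits with rc 127 and, because of
# "set -e", the whole Ansible task is reported as failed.
ceph -s
```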
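Although the deploy playbook fails, the post-run playbooks still execute: logs are staged and uploaded, and the "Clean the cloud environment" task tears the testbed down in the order visible in the timestamps: servers and the keypair first, then (once the servers are gone) ports, volumes, router interfaces, subnets, networks, security groups, the floating IP and finally the router. The job uses its own cleanup tooling for this; the following is only an illustrative equivalent with the plain OpenStack CLI, using the resource names that appear in the log (port and volume IDs elided):

```bash
#!/usr/bin/env bash
# Illustrative teardown in the same order as the log; not the tooling the job
# actually runs.
set -e

for server in testbed-manager testbed-node-{0..5}; do
    openstack server delete --wait "${server}"
done
openstack keypair delete testbed

# Ports and volumes are deleted individually in the log; their IDs and names
# are elided here.

openstack router remove subnet testbed subnet-testbed-management
openstack subnet delete subnet-testbed-management
openstack network delete net-testbed-management
openstack security group delete testbed-management testbed-node
openstack floating ip delete 81.163.192.77
openstack router delete testbed
```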